Dataset schema: id (string, length 3-9); source (string, 1 class); version (string, 1 class); text (string, length 1.54k-298k); added (string date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25); created (string date, 1-01-01 00:00:00 to 2024-07-31 00:00:00); metadata (dict).
253589578
pes2o/s2orc
v3-fos-license
A Priori L2-Error Estimates for Approximations of Functions on Compact Manifolds
Given a $\mathcal{C}^2$-function f on a compact riemannian manifold (X, g) we give a set of frequencies $L = L_f(\varepsilon)$ depending on a small parameter $\varepsilon > 0$ such that the relative L2-error $\frac{\|f - f^L\|}{\|f\|}$ is bounded above by $\varepsilon$, where $f^L$ denotes the L-partial sum of the Fourier series of f with respect to an orthonormal basis of $L^2(X)$ constituted by eigenfunctions of the Laplacian operator Δ associated to the metric g.
Introduction
The origin of this work was to give an answer to the following quite naive question: Given a 2π-periodic function f(θ) and a fixed ε > 0, is it possible to find an explicit subset of frequencies $L = L_f(\varepsilon) \subset \mathbb{Z}$ for which we have the following a priori bound for the relative error:
$$\frac{\|f - f^L\|}{\|f\|} \leq \varepsilon\,?$$
Here $f^L$ denotes the partial sum $f^L(\theta) = \sum_{\ell \in L} \hat{f}_\ell\, e^{i\ell\theta}$, where $\hat{f}_\ell = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)\, e^{-i\ell\theta}\, d\theta$ are the Fourier coefficients, and $\|g\| = \left(\frac{1}{2\pi}\int_0^{2\pi} |g(\theta)|^2\, d\theta\right)^{1/2}$ is the L2-norm of a function g(θ). It turns out that such a bound can be explicitly constructed using only the quantities $\|f\|$, $\|f'\|$, $\|f''\|$ and ε by an elementary application of the Chebyshev inequality in probability theory. In fact, the context in which we were first interested was a little bit more technically involved but heuristically analogous: we wanted to obtain a bound for the number of significant Fourier coefficients of a spherical function.
Main Result
In order to fix the ideas, we fix an oriented compact riemannian manifold (X, g) and we consider the (scalar or hermitian) product in the space of (real or complex valued) square integrable functions on X defined by $\langle f_1, f_2 \rangle = \int_X f_1\, \overline{f_2}\, dV$, where dV is the volume element, and we denote by A⁰(X) its L2-completion. The riemannian metric g over TX extends to every tensorial fiber bundle over X, and in particular to the vector bundle Ω^k(X) of differential k-forms. On the other hand, we recall that every scalar product on a real vector space V extends in a natural way to a hermitian product on its complexification V ⊗ C. In this way, we can define a (scalar or hermitian) product in the space of (real or complex valued) differential k-forms on X by means of $\langle \alpha, \beta \rangle = \int_X g(\alpha, \overline{\beta})\, dV$, and we can consider its corresponding L2-completion, which will be denoted by A^k(X).
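The construction answering the naive question above is concrete enough to prototype numerically. Below is a minimal sketch for the periodic case, assuming an equally spaced sampling of f and using FFT coefficients in place of the exact integrals; the function and variable names are ours, not the paper's.

```python
import numpy as np

def chebyshev_frequency_window(f_samples, eps):
    """Frequency selection for a 2*pi-periodic function via Chebyshev.

    The weights |f_hat_l|^2 / ||f||^2 define a probability law on the
    eigenvalues lambda_l = l**2 of -d^2/dtheta^2; Chebyshev's inequality
    bounds the spectral mass left outside [mu - sigma/eps, mu + sigma/eps]
    by eps**2, which yields a relative L2-error of at most eps.
    """
    n = len(f_samples)
    fhat = np.fft.fft(f_samples) / n             # approximate Fourier coefficients
    ell = np.fft.fftfreq(n, d=1.0 / n)           # integer frequencies l
    lam = ell ** 2                               # eigenvalues lambda_l = l^2
    p = np.abs(fhat) ** 2
    p /= p.sum()                                 # Parseval: weights sum to 1
    mu = (p * lam).sum()                         # E(Z) = ||f'||^2 / ||f||^2
    sigma = np.sqrt((p * lam ** 2).sum() - mu ** 2)
    lo, hi = mu - sigma / eps, mu + sigma / eps  # the interval [L^-, L^+]
    window = (lam >= lo) & (lam <= hi)
    return ell[window], (lo, hi)                 # L_f(eps) = lambda^{-1}([lo, hi])
```

By Chebyshev's inequality, the spectral mass outside the returned window is at most ε², so the relative L2-error of the corresponding partial sum is at most ε.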
We consider the exterior derivative operator d: A⁰(X) → A¹(X) and its formal adjoint d*: A¹(X) → A⁰(X) with respect to the (scalar or hermitian) products introduced above. It is well known that the Laplacian operator Δ := d*d + dd* = d*d over A⁰(X) is self-adjoint, positive semidefinite and has discrete spectrum. Consequently, there exists a countable orthonormal basis $\{\psi_\ell\}_{\ell \in \Lambda}$ of eigenfunctions of Δ. Thus, there exists a function λ: Λ → R₊, ℓ ↦ λ_ℓ, such that Δψ_ℓ = λ_ℓ ψ_ℓ for all ℓ ∈ Λ. For every f ∈ A⁰(X) we consider its Fourier series $\sum_{\ell \in \Lambda} \hat{f}_\ell\, \psi_\ell$ with $\hat{f}_\ell = \langle f, \psi_\ell \rangle \in \mathbb{C}$. For every subset L ⊂ Λ we define the partial sum of f over L as
$$f^L = \sum_{\ell \in L} \hat{f}_\ell\, \psi_\ell. \qquad (1)$$
Our main result is the following.
Theorem 1. For every $\mathcal{C}^2$-function f on X and every ε > 0 there is a set of frequencies $L_f(\varepsilon) \subset \Lambda$ such that
$$\|f - f^{L_f(\varepsilon)}\| \leq \varepsilon\, \|f\|. \qquad (2)$$
In fact, $L_f(\varepsilon)$ can be chosen as the preimage by λ of the interval $[L_f^-(\varepsilon), L_f^+(\varepsilon)]$, where
$$L_f^\pm(\varepsilon) = \frac{\|D_1 f\|^2}{\|f\|^2} \pm \frac{1}{\varepsilon}\sqrt{\frac{\|D_2 f\|^2}{\|f\|^2} - \frac{\|D_1 f\|^4}{\|f\|^4}}. \qquad (5)$$
The proof is a direct application of the following two statements.
Lemma 2. Consider the (non-bounded) linear operators D₁ := d and D₂ := Δ. Then for every f ∈ A⁰(X) for which D_j(f), j = 1, 2, are defined, the following relations hold:
$$\|D_1 f\|^2 = \sum_{\ell \in \Lambda} \lambda_\ell\, |\hat{f}_\ell|^2, \qquad \|D_2 f\|^2 = \sum_{\ell \in \Lambda} \lambda_\ell^2\, |\hat{f}_\ell|^2. \qquad (4)$$
Moreover, $\|D_1 f\|^2 = \int_X \|\nabla_g f\|_g^2\, dV$, where ∇_g and ‖·‖_g are the gradient operator and the norm with respect to the metric g. Furthermore, the map λ: Λ → R₊ has discrete image and finite fibers.
Proof. Since D₂ψ_ℓ = λ_ℓψ_ℓ it follows that $D_2 f = \sum_{\ell \in \Lambda} \hat{f}_\ell\, \lambda_\ell\, \psi_\ell$ and consequently $\|D_2 f\|^2 = \sum_{\ell \in \Lambda} |\hat{f}_\ell|^2 \lambda_\ell^2$. On the other hand, for all ℓ, ℓ′ ∈ Λ we have $\langle d\psi_\ell, d\psi_{\ell'} \rangle = \langle \psi_\ell, d^*d\psi_{\ell'} \rangle = \langle \psi_\ell, \Delta\psi_{\ell'} \rangle = \langle \psi_\ell, \lambda_{\ell'}\psi_{\ell'} \rangle = \lambda_{\ell'}\, \delta_{\ell\ell'}$. Since $df = \sum_{\ell \in \Lambda} \hat{f}_\ell\, d\psi_\ell$, it follows that $\|D_1 f\|^2 = \sum_{\ell \in \Lambda} \lambda_\ell |\hat{f}_\ell|^2$. The other expression for ‖D₁f‖ follows from the well-known formula g(df, df) = g(∇_g f, ∇_g f). Finally, the last claim follows from the fact that the image of λ is the spectrum of the Laplacian Δ and every eigenvalue has finite multiplicity.
The set $L = L_f(\varepsilon) = \lambda^{-1}([L_f^-(\varepsilon), L_f^+(\varepsilon)])$ satisfies the error estimate (2). Moreover, if λ has discrete image and finite fibers, then the partial sum (1) corresponding to $L = L_f(\varepsilon)$ has only a finite number of terms for every ε > 0.
Proof. Given f ∈ A⁰, consider a discrete random variable Z taking the value λ_ℓ with probability $|\hat{f}_\ell|^2 / \|f\|^2$, whose moments of order 1 and 2 are, respectively, $E(Z) = \|D_1 f\|^2 / \|f\|^2$ and $E(Z^2) = \|D_2 f\|^2 / \|f\|^2$, thanks to Relations (4). The standard deviation of Z is given by $\sigma(Z) = \sqrt{E(Z^2) - E(Z)^2}$. On the other hand, given any pair of real numbers a ≤ b and the set L = λ⁻¹([a, b]), we have $\|f - f^L\|^2 = \sum_{\lambda_\ell \notin [a,b]} |\hat{f}_\ell|^2 = \|f\|^2\, P(Z \notin [a, b])$. By definition (5), it follows that $L_f^\pm(\varepsilon) = E(Z) \pm \varepsilon^{-1}\sigma(Z)$. We conclude the proof by applying Chebyshev's inequality: $P(|Z - E(Z)| \geq \varepsilon^{-1}\sigma(Z)) \leq \varepsilon^2$.
As a first example, consider the circle, given by χ(ϕ) = e^{iϕ} and the metric g = dϕ²/4π², whose volume element is dV = dϕ/2π. The Laplacian can be written as Δ = −∂²_ϕ, and an orthonormal eigenbasis of complex functions is given by ψ_ℓ(ϕ) = e^{iℓϕ}, ℓ ∈ Λ = Z. Next consider the sphere S², given by χ(ϕ, θ) = (cos ϕ sin θ, sin ϕ sin θ, cos θ) and the metric $g = \frac{1}{16\pi^2}\left(\sin^2\theta\, d\varphi^2 + d\theta^2\right)$ induced by that of R³, whose volume element is dV = (sin θ / 4π) dϕ dθ. We consider the orthonormal basis given by the harmonic spherical functions ψ_ℓm, expressed in terms of the associated Legendre polynomials, see for instance [1], with eigenvalues λ_ℓ = ℓ(ℓ + 1).
We can improve the choice of the set of frequencies $L_f(\varepsilon)$ fulfilling the required estimate (2). One can proceed in the following way.
Theorem 6. Assume that we have already computed the Fourier coefficients of f for a given subset I ⊂ Λ. Then the inequality (2) also holds for the new set of frequencies obtained by combining I with a correspondingly narrowed Chebyshev interval.
Proof. Indeed, if we denote D₀ = Id, it follows from the Parseval identity and Relations (4) that for each j = 0, 1, 2 the following equalities hold: $\|D_j f\|^2 = \sum_{\ell \in I} \lambda_\ell^j\, |\hat{f}_\ell|^2 + \sum_{\ell \notin I} \lambda_\ell^j\, |\hat{f}_\ell|^2$.
As a numerical illustration, we consider a spectral profile that is a sum of n gaussian distributions with amplitudes a_i, means μ_i and standard deviations σ_i, see Fig. 1. We deal first with the unimodal case n = 1, a₁ = 1, μ₁ = 6, σ₁ = 1. We take I = λ⁻¹(J). Notice that the two first estimated intervals are approximately centered at μ₁ = 6.
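As a quick check of the bound, the window construction can be applied to a concrete periodic sample and compared against the actual truncation error. This usage sketch reuses chebyshev_frequency_window from the sketch above; the test function is hypothetical, chosen only because it is smooth and 2π-periodic.

```python
import numpy as np

n, eps = 2048, 0.1
theta = 2 * np.pi * np.arange(n) / n
f = np.exp(np.cos(3 * theta)) + 0.5 * np.sin(7 * theta)   # hypothetical smooth test function

ell = np.fft.fftfreq(n, d=1.0 / n)
fhat = np.fft.fft(f) / n
_, (lo, hi) = chebyshev_frequency_window(f, eps)          # defined in the sketch above

keep = (ell ** 2 >= lo) & (ell ** 2 <= hi)                # lambda^{-1}([L^-, L^+])
f_L = np.fft.ifft(np.where(keep, fhat * n, 0.0)).real     # partial sum f^L
rel_err = np.linalg.norm(f - f_L) / np.linalg.norm(f)
print(rel_err, rel_err <= eps)                            # the a priori bound holds
```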
Now, we treat the bimodal case n = 2, a₁ = 1.5, a₂ = 1, μ₁ = 2, μ₂ = 13, σ₁ = σ₂ = 1 and I = λ⁻¹(J).
To finish the theoretical part of the paper, we point out that the compactness assumption on X is necessary to state Theorem 1 in its present form, but there exists an alternative statement on the simplest non-compact manifold X = Rⁿ: the reconstruction of f from its Fourier transform restricted to a suitable compact set of frequencies L belongs to L²(Rⁿ) and it satisfies the analogue of inequality (2). The proof is completely analogous to the one of Theorem 1, using the Fourier transform $\hat{f}$ and its reconstruction formula instead of Fourier series. In this case, Λ = Rⁿ and all the summations $\sum_{\ell \in \Lambda} a_\ell$ must be replaced by $\int_{\mathbb{R}^n} a(\xi)\, d\xi$. The analogues of Relations (4) follow from the well-known identity $\widehat{\tfrac{\partial f}{\partial x_j}}(\xi) = 2i\pi\, \xi_j\, \hat{f}(\xi)$, using the map λ: Rⁿ → R₊ given by λ(ξ) = 4π²|ξ|², where |ξ| denotes the euclidean norm of a vector ξ ∈ Rⁿ. In fact, as in the preceding version, L = λ⁻¹(I) for some compact interval I ⊂ R.
Remark 9. The uncertainty principle for f ∈ L²(Rⁿ) asserts that the dispersions of f and $\hat{f}$ satisfy $D_0(f)\, D_0(\hat{f}) \geq C_n$, see [3], where C_n is some explicit positive constant depending only on the dimension n. The uncertainty principle applied to f can be interpreted as a lower bound for the midpoint μ of the interval I. Analogously, the uncertainty principle can be applied to the partial derivatives of f.
Application: Smooth Approximations of Polyhedral Objects
Theorem 1 can be applied to the problem of finding a smooth approximation of a geometric object, typically a curve or a surface, from which we only know a finite set of points.
Closed Curves
Let γ(s) = (x_j(s))_{j=1}^n be a closed curve in Rⁿ of class C², parametrized by arc length s ∈ [0, L], where L is its total length. Assume we only know a finite number of consecutive points p₀, p₁, …, p_N on the curve, with p₀ = p_N, and we aim to give an explicit parametrization γ̃(s) approximating γ(s) with a relative error less than ε > 0, i.e. ‖γ − γ̃‖ ≤ ε‖γ‖, using for this the hermitian L²-product defined by $\langle f, g \rangle = \frac{1}{L}\int_0^L f\, \bar{g}\, ds$. We consider the orthonormal basis given by the functions $\psi_\ell(s) = e^{i 2\pi \ell s / L}$, varying ℓ ∈ Z, and the corresponding Fourier series of each coordinate x_j. Theorem 1 gives us the corresponding bound. In order to obtain discrete counterparts of the continuous quantities involved in the preceding formula, we proceed as follows, as shown in the sketch after this paragraph. For each k = 1, …, N we define ds_k = ‖p_k − p_{k−1}‖ and s_k = s_{k−1} + ds_k, taking also s₀ = 0. Then we can discretize the integrals involved above, obtaining estimates for (a) the length, L ≈ s_N, and (b) the Fourier coefficients $\hat{x}_{j,\ell}$, approximated by the corresponding Riemann sums. Besides the error ε given by considering only a finite number of Fourier terms, this procedure introduces two new sources of error, namely the approximations made in (a) and (b). Nevertheless, since the frequency set Λ is discrete, the method of choosing the subset $L_f(\varepsilon)$ is robust in the following sense. First, by perturbing ε slightly if necessary, we can assume that $L_f^\pm(\varepsilon)$ are not close to integer numbers. Then, if the distances ds_k between consecutive points are small enough, the set $L_f(\varepsilon)$ obtained by the discretization method described in (b) does not change.
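The Riemann sums in (a) and (b) are left implicit in the text; under the natural reading, a discrete counterpart of the Fourier coefficients of a sampled closed curve can be sketched as follows (array shapes, names, and the dictionary return type are our choices).

```python
import numpy as np

def curve_fourier_coeffs(points, ells):
    """Discrete Fourier coefficients of a closed polygonal curve.

    points: array of shape (N + 1, n) of consecutive samples p_0, ..., p_N
    with points[0] == points[-1].  Implements the discretization above:
    ds_k = |p_k - p_{k-1}|, s_k = s_{k-1} + ds_k, L ~ s_N, and a Riemann
    sum for x_hat_{j,l} = (1/L) * integral of x_j(s) exp(-2 pi i l s / L) ds.
    """
    ds = np.linalg.norm(np.diff(points, axis=0), axis=1)   # chord lengths ds_k
    s = np.concatenate([[0.0], np.cumsum(ds)])             # arc-length samples s_k
    length = s[-1]                                         # total length L ~ s_N
    coeffs = {}
    for l in ells:
        phase = np.exp(-2j * np.pi * l * s[1:] / length)   # e^{-2 pi i l s_k / L}
        coeffs[l] = (points[1:].T * (phase * ds)).sum(axis=1) / length
    return coeffs, length
```

The smooth approximation γ̃(s) is then the partial sum of these coefficients over the frequency window selected as in Theorem 1.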
Star-Shaped Surfaces
Let S ⊂ R³ be a closed surface which is star-shaped with respect to the origin, i.e. for every u ∈ S² the half line {λu, λ ∈ R₊} cuts S in a unique point r(u)u, determined by the radial function r: S² → R₊, which we can express as $r = \sum_{(\ell,m) \in \Lambda} \hat{r}_{\ell m}\, \psi_{\ell m}$ according to the notations introduced above. Given a triangulation of S, the coefficients $\hat{r}_{\ell m} = \langle r, \psi_{\ell m} \rangle$ are approximated by sums over the triangles, where $\bar{T}_i$ is the center of mass of the triangle T_i, $\tilde{\psi}_{\ell m}$: R³ → R is a degree ℓ polynomial extension of ψ_ℓm: S² → R, and A(T_i) is the area of the spherical triangle obtained from T_i by radial projection onto S². In the same vein, the squared norm $\|r\|^2 = \frac{1}{4\pi}\int_U r(\chi(\varphi,\theta))^2 \sin\theta\, d\theta\, d\varphi$ can be approximated by the analogous sum over the triangles. In order to obtain a discrete counterpart of ‖dr‖², we need to consider the parametrization σ(ϕ, θ) = r(χ(ϕ, θ))χ(ϕ, θ) of S. By a straightforward computation, using that χ, ∂_θχ and ∂_ϕχ/sin θ form a direct orthonormal basis, we obtain an equality satisfied by the outer unit normal vector N: S → S² of S, and consequently a discretization in terms of the vectors N_i, where N_i is the outer unit normal vector to the triangle T_i, which can be easily computed from the given triangulation of S. Finally, to compute a discrete counterpart of the squared norm of the spherical laplacian of r, we apply the formula given in [4] for the discrete version of the Laplacian of a function f defined on the vertex set of a triangulated surface M ⊂ R³. Here N(i) denotes the set indexing the vertices adjacent to p_i; if j ∈ N(i), then α_ij and β_ij are the opposite angles to the edge p_i p_j. In our case we must take M = S² and f = r. Thus, {p_i} is the set of vertices of the triangulation, T_ij is the unique triangle containing the oriented edge p_i p_j, and α_ij and β_ij are the opposite angles to the edge p_i p_j after projecting the triangulation radially onto S².
Example 10. The triangulation of the surface of the left atrium of a human heart shown in Fig. 2 has N = 4,000 triangles and V = 2,002 vertices. In Table 1 we list the L²-norm of the degree ℓ homogeneous part $r_\ell = \sum_{m=-\ell}^{\ell} \hat{r}_{\ell m}\, \psi_{\ell m}$ of r for 0 ≤ ℓ ≤ 17, and in Fig. 3 we represent these norms graphically as a function of ℓ.
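The display of [4]'s discrete Laplacian did not survive extraction above. The standard cotangent-weight formula matches the description in terms of the opposite angles α_ij and β_ij, namely $\Delta f(p_i) \propto \sum_{j \in N(i)} (\cot \alpha_{ij} + \cot \beta_{ij})\,(f(p_i) - f(p_j))$; the sketch below implements that standard version under the assumption that it is the one intended, with any per-vertex area normalization used in [4] omitted.

```python
import numpy as np

def cotangent_laplacian(vertices, triangles, f):
    """Discrete Laplacian of f on a triangulated surface (cotangent weights).

    For each edge (i, j) of a triangle, the vertex o opposite the edge
    contributes cot(angle at o); summing over the (at most two) triangles
    sharing the edge yields cot(alpha_ij) + cot(beta_ij).
    """
    lap = np.zeros(len(vertices))
    for tri in triangles:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u = vertices[i] - vertices[o]
            v = vertices[j] - vertices[o]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))  # cot of angle at o
            lap[i] += cot * (f[i] - f[j])
            lap[j] += cot * (f[j] - f[i])
    return lap / 2.0   # (1/2) * sum_{j in N(i)} (cot a_ij + cot b_ij)(f_i - f_j)
```

For the application above, one takes f = r on the radially projected triangulation of S²; the squared norm ‖Δr‖² could then be approximated by an area-weighted sum of the squared vertex values.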
2022-11-18T14:25:48.945Z
2014-02-05T00:00:00.000
{ "year": 2014, "sha1": "343dcba9de5767f79a12949800a668c61b8dbb41", "oa_license": "CC0", "oa_url": "https://ddd.uab.cat/pub/artpub/2014/123652/medjoumat_a2014.pdf", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "343dcba9de5767f79a12949800a668c61b8dbb41", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
214635822
pes2o/s2orc
v3-fos-license
Evaluation of Phacoemulsification Cataract Surgery Outcomes After Penetrating Keratoplasty
BACKGROUND: Cataract is one of the causes of impaired visual acuity (VA) after penetrating keratoplasty (PK); it can be treated by cataract surgery after PK or by a triple procedure. Cataract surgery after PK has the advantage that ocular parameters such as axial length, anterior chamber depth (ACD), and corneal curvature have stabilized after removal of all sutures, so intraocular lens (IOL) power can be calculated correctly and postoperative VA improves significantly. In Vietnam, there has been no study of cataract surgery after PK; therefore, we conducted this research. AIM: To evaluate the outcomes of phacoemulsification cataract surgery following primary PK. METHODS: Non-randomized controlled intervention study. Nineteen eyes (19 patients) underwent phacoemulsification plus IOL insertion after initial PK in the Cornea Department, Vietnam National Institute of Ophthalmology, from December 2013 to September 2014. RESULTS: All patients presented with reduced VA, including 17 eyes (89.5%) with VA ≤ 20/200; mean astigmatism was 7.9 ± 1.0 D. Corneal grafts were clear in 16 eyes, while corneal opacity was seen in 3 eyes. All cataracts were graded 2 or higher. After cataract surgery, VA better than 20/200 was achieved in 72.22% of cases. There was a marked reduction of postoperative astigmatism, to 1.8 ± 0.8 D (p < 0.05). However, an immunologic graft reaction presented in one eye, and two edematous corneas were also reported after cataract surgery; after treatment, one cornea regained its clarity. CONCLUSION: Phacoemulsification cataract surgery following initial PK showed good outcomes, with improved postoperative VA, reduced astigmatism, and a high ultimate graft survival rate.
Introduction
Cataract is considered one of the causes of impaired VA after penetrating keratoplasty (PK). It can occur before or after the initial PK. There are several causes of postoperative cataract, namely an accelerated preexisting cataract, a steroid-induced cataract, or corneal graft rejection [1], [2]. However, whenever penetrating keratoplasty is considered and cataract is present, one must decide whether to remove the cataract at the same time as the corneal transplantation. The suggested advantages of a triple procedure (performing PK, extracapsular cataract extraction, and intraocular lens (IOL) implantation as a one-stage surgery) are the lower expense of one combined procedure [3], less damage to the endothelium of the transplanted cornea from subsequent cataract surgery, and faster visual improvement [4], [5]. However, a triple procedure has some potential intraoperative complications encountered during the open-sky technique, for example, expulsive hemorrhage, IOL implantation failure due to posterior capsule rupture, and prolapse of the vitreous body [6], [7]. In particular, the main drawback of this procedure is that IOL power calculation cannot be performed accurately, resulting in a high degree of myopia or hyperopia postoperatively [8], [10]. The reason is that axial length, ACD, and corneal curvature can change considerably after keratoplasty, undermining the accuracy of the biometric information. Therefore, the postoperative keratometric readings can only be estimated, and estimates can be quite inaccurate.
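The weight that keratometry carries in IOL power calculation explains why inaccurate post-PK readings matter. The study itself used the SRK/T formula (see Methods below), which involves several intermediate steps; as a simpler illustration of how regression-style IOL formulas combine axial length and keratometry, here is a sketch of the older SRK II formula. It is a stand-in for exposition only, not the formula used in this study, and the example values are hypothetical.

```python
def srk_ii_iol_power(axial_length_mm, mean_k_diopters, a_constant):
    """SRK II regression estimate of IOL power for emmetropia.

    P = A - 2.5 * L - 0.9 * K, where the A-constant is adjusted for
    axial length (shorter eyes get a higher effective A-constant).
    Illustrative only; the study used the more elaborate SRK/T formula.
    """
    a = a_constant
    if axial_length_mm < 20.0:
        a += 3.0
    elif axial_length_mm < 21.0:
        a += 2.0
    elif axial_length_mm < 22.0:
        a += 1.0
    elif axial_length_mm > 24.5:
        a -= 0.5
    return a - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Hypothetical example: a 23.0 mm eye with post-PK keratometry of 41.7 D
print(srk_ii_iol_power(23.0, 41.7, 118.4))
```

With K weighted by 0.9, a keratometry mis-estimate of 2 D shifts the computed power by about 1.8 D, which is the kind of refractive surprise described above.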
This drawback is avoided by performing cataract surgery after PK, when all these parameters have stabilized after removal of all sutures, and IOL power can be calculated correctly. Corneal astigmatism can be reduced with corneal incisions placed in the steepest meridian during cataract surgery. Therefore, postoperative VA improves significantly. However, cataract surgery after initial PK is a difficult surgical procedure that requires an experienced surgeon. The aim of our study was to assess phacoemulsification cataract surgery outcomes after primary PK in the Cornea Department of the Vietnam National Institute of Ophthalmology (VNIO).
Patients
This study was approved by the Scientific Board of VNIO. Patients who underwent a primary PK and then received phacoemulsification and IOL implantation were included. Nineteen eyes (19 patients) were included in this study, and all patients signed an informed consent form before surgery. All phacoemulsification cataract surgeries were performed from December 2013 to September 2014 at the Cornea Department, VNIO. It was a non-randomized controlled intervention study. Medical histories were collected from the patients and their medical records. Patients underwent a complete ophthalmic examination, including VA with the standard Snellen chart, IOP measurement with applanation tonometry, slit-lamp biomicroscopy, fundus examination after pupil dilation, A-scan ultrasound for axial length, and keratometric readings measured by topography. Preoperative evaluation included the status of the corneal graft (clear, edema, endothelial decompensation, endothelial immunologic rejection, corneal incision), anterior chamber, iris, pupil shape, and the grade of cataract. IOL power calculation used the SRK/T formula and targeted emmetropia.
Keratoplasty
All PKs were performed under general anesthesia. The donor corneas for the PKs were taken from our eye bank or imported from the CorneaGen eye bank. We used an endothelial punch and a Teflon block to prepare the corneal donors, with a donor diameter 0.50 mm greater than the recipient bed. Recipient corneas were trephined using handheld corneal trephines. Sixteen interrupted 10-0 nylon sutures were placed, followed by suture adjustment to reduce postoperative corneal astigmatism.
Cataract Surgery
Nuclear hardness was defined using the Emery-Little lens opacities classification system. Topical anesthesia with 2% Alcaine was administered three times before the surgery. The main corneal incision was made with a 2.8 mm knife and positioned in the steepest meridian to reduce astigmatism. A viscoelastic agent was then injected into the anterior chamber, followed by a second corneal incision 90 degrees away from the main incision. A continuous curvilinear capsulorhexis of about 6 mm was made using rhexis forceps, after which BSS was injected during hydrodissection to separate the lens cortex from the capsule. The nucleus was then sculpted with the phaco tip, generally at low phacoemulsification energy to avoid endothelial damage. The IOL was implanted in the capsular bag, followed by hydration of the corneal incision with BSS or a 10-0 nylon suture, if needed. Topical antibacterial and steroid eyedrops were administered four times per day and then tapered over several months.
Postoperative examination
Patients were evaluated on the first postoperative day and at one week and one, three, and six months after surgery.
At each follow-up visit, the operated eyes were examined for VA, IOP, refractive error, astigmatism, corneal graft clarity, signs of graft rejection, infection, IOL centration in the capsular bag, and posterior capsule status. Good results were defined as corneal graft clarity, a stable in-the-bag IOL, posterior capsule clarity, and improved VA. Moderate results were a blurry graft, a stable in-the-bag IOL, posterior capsule clarity or light posterior capsular opacity, and slightly increased or unchanged VA. Bad results were graft edema, IOL dislocation, severe posterior capsular opacity, and reduced vision compared with preoperative VA.
Results
Nineteen eyes of 19 patients underwent phacoemulsification plus IOL implantation after the original PK. The mean age in our study group was 49.3 ± 17.4 years; patients aged over 50 accounted for the highest proportion (57.9%), and the male-to-female ratio was 1:1. Preoperative IOP was normal in all eyes, at 12.05 ± 1.6 mmHg, measured with an Icare tonometer. Keratometric readings were highest at 41.67 D and lowest at 29.48 D. The mean preoperative astigmatism was 7.9 ± 1.0 D (range, 2.2 to 12 D). Preoperative examination showed that 4 eyes had complete cataract due to herpes simplex keratitis and that 3 eyes had undergone therapeutic keratoplasty (Tables 2 and 3). At the first postoperative week, 1 month, and 3 months, corneal clarity was noted in 17, 16, and 16 eyes, respectively (Figure 1).
Figure 1: Cataract in a PK eye; A) before cataract surgery; B) after cataract surgery
At 6 months after surgery, 8 eyes retained corneal clarity. According to Table 3, at the first postoperative week one eye had endothelial immunologic rejection because the patient could not follow the treatment and stopped using steroid eyedrops for one week (Figure 2). Two corneas were edematous from the first postoperative day through the 3-month follow-up visit; after treatment, one graft returned to corneal clarity with improved vision, while the other eye developed endothelial decompensation. The IOL was positioned in the capsular bag at all examinations. Table 4 shows that postoperative astigmatism decreased to 1.8 ± 0.8 D, a statistically significant difference (p < 0.05).
Discussion
Cataract is an age-related disease considered a leading cause of blindness worldwide. Apart from age-related factors, corneal transplantation can also contribute significantly to cataract formation after surgery. Rathi VM studied 184 eyes that underwent PK as the first surgery and reported that 45 eyes (24.45%) developed cataract a few years later; notably, 31 of these 45 eyes developed cataract within the first year after corneal transplantation [1]. Therefore, cataract surgery is an essential requirement to provide better vision for transplanted eyes. Regarding cataract surgery outcomes, final VA is an important criterion. In our preoperative evaluation, 17 of 19 eyes (89.47%) had VA ≤ 20/200, 13 of which were considered to have profound visual impairment, with VA of counting fingers at less than 3 meters. These data show the markedly poor vision of all patients before cataract surgery. Given the high rate of postoperative refractive error reported with the triple procedure, we performed cataract surgery after PK to achieve better refractive outcomes.
In our series, the stabilization of keratometric readings after removal of all sutures allowed a reliable calculation of IOL powers. The mean keratometric reading was 41.76 D; however, only two eyes had keratometric readings in the 40-44 D range. Given that the majority of keratometric readings in transplanted eyes fell outside the normal range (40-44 D), our study showed findings similar to the studies of Dietrich T and Duran JA [11], [12]. These findings support the benefits of phacoemulsification cataract surgery after initial PK (a 2-stage procedure) compared with PK combined with extracapsular cataract extraction and IOL insertion (a triple procedure). In previous studies, high astigmatism was generally observed after a triple procedure [9], [13], up to 17.0 D in a study by Mohammad-Ali Javadi [14]. Because IOL power can be calculated accurately from keratometric readings of the clear transplanted cornea in a 2-stage procedure, this technique is preferred in terms of better postoperative refraction; the inability to do so is the main drawback of a triple procedure in the attempt to achieve an optimal postoperative target refraction. Additionally, in our study, the mean astigmatism was 6.35 D (range, 2.2 D to 12 D). Based on corneal topography, we reduced corneal curvature by placing the phacoemulsification corneal incision at the steepest refractive meridian, thereby reducing astigmatism. This is a significant benefit of a 2-step procedure compared with a triple procedure. In our study, no dislocation of the inserted IOL and no endophthalmitis were documented. On the first postoperative day, uncorrected VA of 20/200 or better was noted in 10 of 19 eyes (52.6%). VA increased gradually over time and stabilized completely in 13 of 18 eyes after 3 months of follow-up, with 6 eyes achieving VA above 20/100. Binder's study reported a similar VA result: VA in 19 of 33 eyes (57%) was 20/100 or better [15]. Hsiao CH showed a similar result in 22/24 eyes (81%) [16]. Some authors reported better results, such as Nagra PK, with 13/29 eyes achieving VA of 20/70 or better, and Geggel HS's study, in which 91% (20/22) of eyes achieved 20/100 or better. The explanation for the difference between our results and other studies is that the majority of corneal grafts in other studies were optical transplantations, while our PKs were therapeutic keratoplasties. Postoperative astigmatism decreased to 1.8 ± 0.8 D compared with preoperative values, a statistically significant difference (p < 0.05). Our astigmatism finding is similar to other reports: Hsiao CH (1.55 ± 1.3 D) [16], Shi WY (1.0 D) [17], Geggel HS (1.96 D) [18], Feizi S (3.03 D; p < 0.05) [19], Nagra PK (2.77 ± 2.36 D), and Dietrich T (3.3 ± 2.1 D) [11]. The reduced postoperative astigmatism in our study was partly due to the selection of the corneal incision at the steepest meridian and to the removal of sutures at follow-up visits. The study by Kamal A. M. Solaiman conducted from 2014 to 2017 also reported results similar to ours, with good visual outcomes. A triple approach using a standard keratometry value, or keratometry values from the fellow normal eye, therefore still carries a major refractive drawback [4], [5], [20]. In the postoperative evaluation, we found that 17 eyes (89.47%) had clear corneal grafts, while two eyes presented graft edema.
The two edematous corneas had experienced difficulties during cataract surgery due to scarring, vascularization, and posterior synechiae of the pupil, which prolonged the operating time. After treatment, these two eyes were stable without any endothelial decompensation. Unfortunately, one eye had graft rejection episodes resulting in corneal opacity, while the other grafts showed transparency at the last visit. Controlling intraocular inflammation after corneal transplantation is essential to maintain graft clarity and to prepare well for later cataract surgery; once postoperative inflammation is well controlled, capsular adhesions and pupillary membranes rarely develop. Phacoemulsification cataract surgery had some difficulties due to poor visualization through scarred transplanted corneas, irregular corneal astigmatism, an unstable anterior chamber, posterior synechiae of the pupil, or even a contracted pupil. Nevertheless, phacoemulsification with IOL implantation was successfully completed without complications in all our patients. A study by Binder PS also presented a similar outcome, with in-the-bag IOL insertion successful in 100% of cases. Posterior capsular opacification presented in 4 of 19 eyes (21.05%), 2 of which showed early opacification at 1 month after surgery; however, no case required YAG laser posterior capsulotomy. In summary, the findings of our study show the remarkable safety and efficacy of phacoemulsification cataract surgery after initial PK. The IOL stayed safely in the capsular bag, and the rate of posterior capsular opacification was not significant. Reduced astigmatism led to significant visual improvement, and the ultimate graft survival rate was good. However, long-term follow-up examinations are needed to establish more accurate long-term outcomes.
2020-03-12T10:16:01.537Z
2019-12-20T00:00:00.000
{ "year": 2019, "sha1": "2a1d9bf93fda8c970743d33adb8bcdb67ad1d6cf", "oa_license": null, "oa_url": "https://doi.org/10.3889/oamjms.2019.379", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "803df0f5070ff6d74bb456654d7b11111184efcd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204241427
pes2o/s2orc
v3-fos-license
Research and instruction services for online advanced practice nursing programs: a survey of North American academic librarians
Introduction The increasing popularity of distance education has led many advanced practice nursing (APN) programs to shift to either online or hybrid models. To meet the needs of these students, some nursing librarians are using technology for virtual research and instruction. This study was designed to assess the extent to which librarians in North America are providing virtual research and instruction services for APN students. Methods An institutional review board-approved, ten-question survey was developed to determine how librarians are providing services for APN students. It was announced in October 2017 through several health sciences librarian email discussion lists. The survey ran for four weeks. Data were analyzed using Qualtrics and Excel. Results Eighty complete responses were received. The majority of respondents (66%) indicated that their universities' APN programs were conducted in a hybrid format. Sixty-seven percent also indicated that they provide library instruction in person. Most librarians indicated that they have provided research assistance through some virtual method (phone or email, at 90% and 97%, respectively), and some have used online chat (42%) and video chat (35%). A strong majority of librarians (96%) indicated that they felt comfortable using technology to provide research assistance and instruction. Conclusion Opportunities exist to leverage technology to provide virtual research assistance and instruction. Greater promotion of these alternative methods can supplement traditional in-person services to provide greater flexibility for graduate nursing students' busy schedules. Some outreach may be necessary to highlight the advantages of virtual services, and further research is needed to identify other barriers and potential solutions.
INTRODUCTION
Advanced practice nurses (APNs), which include nurse practitioners (NPs), nurse anesthetists, nurse midwives, and clinical nurse specialists (CNSs), are increasingly important health care providers in North America who help fill shortages in primary care, especially in rural and other underserved areas [1]. Allowing nurses with advanced training to take on some of the responsibilities of physicians not only reduces the impact of physician shortages, but also improves the quality and efficiency of the health care system by providing comparable services at a lower cost [2]. In Canada, the number of NPs has tripled since 2006, with most specializing in primary care or family practice. The role of NPs has also become more standardized across provinces, so that the basic functions of independent diagnosis and prescription are in the scope of practice of all Canadian NPs [3]. In the United States, the scope of practice of NPs is limited by the different state legislatures. As of 2015, twenty-two states, four territories, and the District of Columbia give NPs full practice authority, allowing them to diagnose and treat patients independently of a physician [4]. Regardless of their scope of practice, however, APNs play vital roles in health care delivery across North America [1,2]. To create an APN workforce, many North American nursing colleges and universities offer graduate degrees in advanced practice.
In 2013, 420 institutions offered some type of APN program in the United States, representing 51.6% of the institutional members of the American Association of Colleges of Nursing (AACN) [5,6]. In 2016, 28 Canadian schools offered at least 1 APN program, representing 20.4% of all nursing schools in the country [7]. The average graduate nursing student is demographically very different from the average undergraduate nursing student and, therefore, has different educational needs and preferences. As graduate nursing students are generally registered nurses who might work nights and weekends [8], an asynchronous education format can allow students to complete programs with some schedule flexibility. Graduate nursing students are often returning to school after several years of working in the field and, thus, are older than undergraduate nursing students [8]. According to the National League for Nursing, 13% of undergraduate bachelor's degree in nursing (BSN) students in 2015-2016 were over 30 years old, whereas 45% of master's degree students and 82% of doctoral nursing students were over 30 years old [9]. Additionally, as more APN programs are offered online, geographic proximity to campus is becoming less of a factor for potential APN students. Research indicates that rural students are more likely to enroll in online APN programs than in on-site programs [10]. Therefore, in-person graduate nursing education and accompanying library instruction may be becoming less commonplace. Distance learning, which puts less emphasis on lectures and more emphasis on active learning, individual and group projects, and critical thinking, also fits with the shift in pedagogical style in nursing education from a teacher-centered model to a student-centered model [11]. Librarians have traditionally been educational partners for nursing students and faculty, teaching concepts such as critical thinking and project research, which help to complement the nursing curriculum. As APN programs migrate to a distance format, nursing librarians who serve these programs should shift their reference and instruction services online. In its Standards for Distance Learning Library Services, the Association of College and Research Libraries (ACRL) states that "library services offered to the distance learning community must be designed to meet a wide range of informational, instructional, and user needs, and should provide some form of direct user access to library personnel," including online services for instruction, research, and consultation [12]. The accrediting bodies of nursing programs such as the National League for Nursing Accrediting Commission (NLNAC) and the AACN also mention the need to provide library services to all nursing students regardless of their location [13]. To provide equitable services for distance students, librarians can employ technologies to offer reference and instruction services online. Reference services can be offered synchronously via telephone, chat, or video-conferencing software as well as asynchronously through email, and instruction services can be provided synchronously through video conferencing as well as asynchronously through recorded materials. Both reference and instruction can also be conducted through a learning management system (LMS), such as Blackboard. A review of the existing literature provides limited information on how librarians have provided services for online graduate nursing programs.
In a report on liaison librarian work with graduate nursing students in distance programs at the University of Kansas Medical Center, Whitehair described providing limited in-person instruction during orientations but in-depth research instruction through video conferencing software. Also, reference services were provided to individuals through a variety of modes: in person, online conferencing, instant messaging, and phone [14]. Gebb and Young of Frontier Nursing University, which offers online APN programs, described an in-person instruction session focused on mobile resources for point-of-care use during a multiday clinical intensive [15]. Several other articles described how nursing librarians have embedded resources and services into their nursing schools' LMSs. Guillot et al. and Sullo et al. reported embedding instruction and reference services for graduate-level nursing courses in Blackboard [16,17], and other authors reported similar efforts with undergraduate nursing students [18][19][20]. However, the authors found no published studies that sought to determine the prevalence of online reference and instruction services offered by librarians working with graduate nursing programs across North America. To address this gap, we aimed to determine the extent to which nursing liaison librarians in the United States and Canada have moved their reference and instruction services for APN students online as their programs have transitioned into hybrid or fully online formats.
METHODS
To determine the extent to which librarians provide reference and instruction services for APN distance programs, we created a ten-question survey (supplemental Appendix A). No personal information was obtained by the survey beyond respondents' geographical locations. The Stony Brook University Institutional Review Board assessed the project and exempted it from further review (CORIHS 2017-4216-F). The survey was administered through Stony Brook University's Qualtrics account [21] in October 2017 and was open for 4 weeks. We publicized the survey through several mailing lists that health sciences librarians commonly use: the Medical Library Association's (MLA's) MEDLIB-L, the Canadian Health Libraries Association/Association des bibliothèques de la santé du Canada's CANMEDLIB, the email discussion lists of 4 MLA chapters (Southern, Mid-Atlantic, New York-New Jersey, and Philadelphia Regional), and MLA's Nursing and Allied Health Resources Section email discussion list. As MEDLIB-L alone has approximately 1,700 subscribers and CANMEDLIB-L has more than 400 subscribers, we believed that we were able to reach most North American health sciences librarians through this distribution method. Additionally, we promoted the survey through social media by making several announcements on one author's personal Twitter account using the #medlibs hashtag, which many health sciences librarians follow. After the survey closed, the complete data set was exported from Qualtrics to a CSV file. This spreadsheet was used in Microsoft Excel to allow easier sorting of the responses and the creation of separate datasets for the United States and Canada [22].
RESULTS
Of the 105 survey respondents, 90 indicated that their institution had an APN program, allowing them to proceed with the remainder of the survey. Ten of these 90 respondents did not fully complete the survey. Therefore, a total of 80 respondents completed the survey. Most (n=71) respondents were from the United States, and fewer (n=9) were from Canada.
The respondents represented 35 states and provinces, with California and Massachusetts being the most well-represented states (supplemental Appendix B). Based on the previously cited total number of schools with APN programs (420 in the United States and 28 in Canada), this represented a de facto response rate of 17% of schools in the United States and 32% of schools in Canada. Nearly all respondents (98%) indicated that their institutions offered an NP program (supplemental Appendix C). Additionally, 30% of respondents were from schools with a nurse anesthetist program, 30% from schools with a CNS program, and 19% from schools with a nurse midwife program. Some (15%) respondents also indicated that their schools offered another type of APN program. Regarding degrees granted by the APN programs, 81% and 83% of respondents indicated that their schools offered doctoral and master's degrees, respectively; 36% indicated that their schools offered a graduate APN certificate; and 5% indicated that their schools offered another type of degree. Most (66%) respondents indicated that their institution's APN program was offered in a hybrid format, 16% were from schools with traditional in-person classes for the APN programs, 9% stated that their schools' programs were completely online, and 9% stated that they were not sure of the format offered at their schools. For this question, we found a substantial difference between Canadian and US respondents. More than half of the Canadian respondents (56%) indicated that the APN programs at their schools were offered in traditional in-person format, 22% stated that their schools' programs were in hybrid format, and 11% indicated that their school's program was entirely online. The US responses for this question were 11% traditional, 71.8% hybrid, and 8.5% completely online. Most respondents indicated that they offered traditional reference services to APN students during the previous year, with 97% offering email reference services, 91% offering in-person reference services (drop-in or by appointment), and 90% offering phone reference services. Less than half of respondents stated that they provided some form of online chat (42%) or video chat (35%) reference services. When asked about the most common form of reference service used by students, most respondents indicated in-person (40%) or email (42%) reference, whereas fewer respondents indicated video chat (6%), phone (5%), or online chat (4%) reference. When asked about instruction provided to APN students, most respondents (81%) indicated teaching in-person classes, whereas fewer taught an online class with video-conferencing software (35%) or taught through some form of chat without audio or video (e.g., chat room, 14%). Some respondents (10%) had not taught any APN classes within the past year. When asked about the most common format for instruction, two-thirds indicated that they generally taught in person. The librarians indicating online chat (1%) or online video (10%) as their most common instructional format represented a much smaller fraction of those responding. When asked if their discomfort with technology was a barrier to offering online reference or instruction services, nearly all respondents (96%) expressed comfort with the requisite technology, with only 4% of respondents indicating they were slightly uncomfortable and none reporting moderate or extreme discomfort. Most respondents (83%) stated that they felt extremely or moderately comfortable with the requisite technology.
DISCUSSION
Our results reveal what seems to be a contradiction in how nursing librarians provide instruction services to graduate nursing students. With 75% of respondents indicating that their institutions offered APN programs either completely online or in a hybrid format, one might expect that the modalities of reference and instruction services that librarians offer would be compatible with distance education. Although this seemed to be true for reference services, with most respondents regularly providing email and phone reference and a fair number providing video or chat reference, two-thirds of respondents indicated that they most commonly provided instruction in person, not online. This demonstrates a disconnect between best practices for distance library services and the frequent reality of in-person instruction. One could speculate that this is the result of librarian discomfort with online instruction technologies; however, the vast majority of respondents stated that they did not perceive any personal barriers to using online technology for reference and instruction. Our survey results suggest that online reference is fairly common for graduate nursing programs. One hypothesis as to why it is not more pervasive is that APN students may opt for in-person research assistance. Librarianship could still be considered by many users to be a face-to-face business, and student insecurity with using databases and crafting detailed search strategies could lead to more requests for in-person appointments. Depending on the technology available to the librarian and the student, adequate demonstrations of the proper use of databases might be impossible without meeting in person. Further, both the librarian and the student could find it more beneficial to teach and learn in person. Because distance students use an LMS such as Blackboard or Canvas for their coursework, it is unlikely that many APN students would experience discomfort with working online. Thus, the low use of online reference services may reflect a student preference for in-person consultations. Because we only surveyed nursing librarians, further research would be needed to determine if APN students perceive any barriers to online library services or prefer in-person services over those offered remotely. Previous research in nursing education indicates that the number of students citing technology as a barrier to their learning is relatively small but significant [8]. Another possible explanation for the continued high use of in-person library services is that many librarians who serve hybrid APN programs provide most of their services when the students are on campus. Some librarians may only provide in-person instruction as a component of new student orientation on campus and not later in the curriculum, when students are actually working on research projects and are no longer on campus. Yet another possibility is that the instruction is being scheduled on regular "on-site" days, vying for time in the day's schedule and for student attention with other activities, such as clinical skills practice and group discussions. In both of these possible scenarios, in-person librarian sessions may simply be a continuation of traditional modes of teaching. This legacy of in-person instruction could be considered a potential institutional barrier to online instruction.
Whitehair addressed this challenge by stressing the importance of using orientation sessions to promote research services so that students can seek out reference assistance later. She also reported teaching research skills to DNP students at the point of need (i.e., during their capstone course) through online synchronous classes [14]. Likewise, Gebb and Young's in-person instruction session on mobile apps takes advantage of one of the few times that their APN students physically visit the campus [15]. Librarians who utilize online tools to provide instruction and develop educational materials could take advantage of opportunities to provide instruction when it is more appropriate in the curriculum as well as make that instruction more meaningful to students. Also, a transition to an online presence at a later point in the curriculum would require strong relationships with the APN nursing faculty. Future research could attempt to identify the systemic barriers to librarian involvement with their liaison units and how this could affect the transition from in-person to online instruction for APN programs. Regarding the number of "not sure" responses to the question asking about the APN programs at the respondent's institution, we note several possible explanations. One possibility is that these respondents were new librarians who had not yet fully integrated into their institutions and, therefore, were not completely familiar with their schools' programs. Another possibility is that the librarians did not work closely enough with their schools' curricula to know which programs were offered. Some librarians might have been from understaffed library environments, serving as liaisons to several academic units, so their involvement with APN programs could have been limited. Finally, librarians who were not nursing liaisons could have attempted to take the survey and, thus, would not have been familiar enough with the APN programs at their institutions to provide adequate answers. As the survey was offered to nursing librarians in both the United States and Canada, we separated out the nine Canadian responses to assess any differences in library services for APN programs between the two countries. The only major difference involved the Canadian responses to the question about the format of the APN programs. If these nine responses were representative of Canadian nursing education, it seems that Canadian APN programs have not shifted online at the same rate as similar programs in the United States; rather, they are offered more often in person at Canadian schools. However, the responses on how Canadian librarians provided reference and instruction services to their APN programs were similar to those of their US counterparts. This might indicate that, despite differences in the education and health care systems between the two countries, librarians in the United States and Canada work similarly with APN students and faculty. Due to the good response rate of US nursing librarians (n=71) and the decent geographic saturation of respondents, our results might serve as a sufficient basis to suggest that improvements could be made in the extent of online reference and instruction services for APN programs. Due to the relatively low response rate of Canadian librarians, however, further research with a larger number of participants would be needed to better gauge the state of distance services for APN students in Canada. There are a number of limitations to this study.
One is that our self-selected group of respondents is not necessarily representative of nursing librarians across North America. Our survey announcement in the email discussion lists of four Eastern MLA chapters and the name recognition of the authors might have contributed to a higher response rate from states in the Eastern and Southeastern United States, resulting in a possible overrepresentation of those states. Consistent with this possibility, there was a lower than anticipated response from Midwestern and Western states and provinces relative to the overall population and number of nursing schools in these areas. Also, because we did not ask for institutional affiliations, it is possible that more than one librarian from the same school answered the survey, creating overrepresentation of certain schools. Several "other" and "not sure" responses may also be perceived as problematic. We acknowledge that providing a free-text box could have prompted more descriptive responses to several survey items, such as the types of APN programs and degrees offered and additional methods of reference and instruction. Without more information, we can only speculate as to what "other" modes of reference and instruction were used by respondents. Further, due to the anonymous nature of the survey, follow-up with respondents was impossible. Future research on this subject could allow respondents to provide email addresses if they are willing to answer further questions about individual responses. The mixed results, diversity of program formats, and individual institutional circumstances indicate that we cannot recommend any single method of library reference and instruction for all nursing librarians. However, our survey was designed to be exploratory and, thus, represents an important first step in gauging the degree to which distance library services are being implemented in North American APN programs. With further research on how librarians are providing these services and which modalities are most successful, it is possible that a set of competencies could be identified for those who provide services to nursing programs in the online environment. Furthermore, a toolbox of best practices could be developed to meet these competencies, which could include evidence-based recommendations for providing quality online instruction and reference services as well as faculty outreach suggestions. Such tools would support MLA's professional competency for Instruction and Instructional Design, which encourages the use of technologies to improve instruction and communication [23]. Based on the results of this exploratory study, we are considering exploring how librarians can further improve their online research and instruction services. Two avenues under consideration are surveying APN students regarding their opinions about library services and assessing the effectiveness of strategies used to integrate online instruction into the APN curriculum. Any next steps will be done with multi-institutional partners to increase the sample size and strengthen the data set, which will allow the conclusions drawn to be more generalizable. Due to the evolving educational environment for APN programs, librarians must continuously promote themselves. Adapting to the virtual environment is one opportunity for working with students who might otherwise be underserved. Effective outreach to both students and faculty is crucial.
Through social media, online research guides, or promotion during in-person interactions, librarians should be proactive in marketing the services that they offer. Successful marketing should translate to increased engagement while demonstrating the value of librarians to their respective schools of nursing and institutions as a whole.
2019-10-03T09:03:21.737Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "4830431df6997960228a4520389a6a470303d7d5", "oa_license": "CCBY", "oa_url": "https://jmla.pitt.edu/ojs/jmla/article/download/689/910", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f5dc3ead4fb805c9b373a8adf496c992ccb33d3", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
226244063
pes2o/s2orc
v3-fos-license
Surgical Sphincter Saving Approach and Topical Nifedipine for Chronic Anal Fissure with Hypertonic Internal Anal Sphincter
The role of augmented internal anal sphincter (IAS) tone in the genesis of posterior chronic anal fissure (CAPF) is still unknown. Lateral internal sphincterotomy is the most employed surgical procedure; nevertheless, it is burdened by a high risk of post-operative anal incontinence. The aim of our study is to evaluate the results of a sphincter-saving procedure with post-operative pharmacological sphincterotomy for patients affected by CAPF with IAS hypertonia. We enrolled 30 patients who underwent fissurectomy and anoplasty with V-Y cutaneous flap advancement; all patients received topical administration of nifedipine 0.3% and lidocaine 1.5% ointment-based therapy before and for 15 days after surgery. The primary goal was the patients' complete healing and the evaluation of the incontinence and recurrence rates; the secondary goal included the evaluation of manometric parameters, symptom relief, and complications related to nifedipine and lidocaine administration. All wounds healed within 40 days after surgery. We did not observe any case of "de novo" postoperative anal incontinence. We recorded 2 recurrences, both healed after conservative therapy. We did not observe any local complications related to the administration of the ointment therapy, with which all patients reported good compliance. Fissurectomy and anoplasty with V-Y cutaneous advancement flap, combined with topical administration of nifedipine and lidocaine, is an effective treatment for CAPF with IAS hypertonia.
Key words: proctology, fissurectomy, anoplasty, anal fissure, lidocaine, nifedipine
Introduction
Chronic anal fissure (CAF) is a frequently occurring proctologic lesion, which presents as a superficial tear in the cutaneous-mucosal transition zone of the anal canal. It is characterized by an ellipsoid shape with an area of approximately 0.6-1 cm²; its edges are thickened, and the internal anal sphincter (IAS) fibres can be seen at the bottom. At the top of the lesion a hypertrophied papilla can often be appreciated, and a skin tag is present on the opposite side. CAFs are most often located at the posterior anal commissure (CAPF); when this is the case, IAS hypertonia is frequently detected. To date, the role and genesis of this augmented IAS tone are still unknown; as a matter of fact, we still wonder whether it is a cause or a consequence of CAF disease. During the last few decades, the treatment of these lesions has aimed to reduce IAS hypertonia with medical or surgical approaches. Lateral internal sphincterotomy (LIS) nowadays represents the strategy of choice for the treatment of CAF associated with IAS hypertonia refractory to conservative therapies. This surgical procedure stands apart for a low rate of post-operative complications and a healing rate of approximately 95%; it allows a fast resolution of clinical symptoms from the first post-operative defecation, leading to complete healing within around 4-6 weeks. The greatest disadvantage of this procedure is the high rate of anal incontinence, which accounts for up to 30-40% of cases. A meta-analysis from 2013 (1), which evaluated the long-term incidence of anal incontinence after LIS, showed an overall risk of continence alteration of 14%; however, on severity analysis, flatus incontinence and soilage/seepage were much more common than frank incontinence to liquid or solid stool. Anal incontinence has a strong impact on the quality of life of patients, and it can be more disabling than CAF itself (2); actually, patients tend to bear a recurrence better than faecal incontinence (3). Furthermore, LIS might distort the planes for further anorectal surgeries that might become necessary. Finally, IAS tone has a tendency to decline over the years, increasing the risk of late incontinence with aging. In order to preserve the anatomical and functional integrity of the sphincteric system as well as to reduce the risk of post-operative anal incontinence, the most used sphincter-saving surgical procedure for the treatment of CAPF is fissurectomy, alone or in association with pharmacological sphincterotomy and/or cutaneous or mucosal flap advancement. The aim of our study is to evaluate the results of fissurectomy and anoplasty with V-Y cutaneous flap advancement in association with topical administration of ointment-based nifedipine 0.3% and lidocaine 1.5% as treatment for patients affected by CAPF with IAS hypertonia.
Materials and Methods
We enrolled 30 patients, all affected by idiopathic and non-recurrent CAPF with hypertonic IAS, who underwent fissurectomy and anoplasty with V-Y cutaneous flap advancement and topical administration of nifedipine 0.3% and lidocaine 1.5%, from March 2015 until March 2018. None of the patients was affected by inflammatory bowel disease or had undergone a previous proctologic surgical procedure. All patients were followed up for at least 2 years after the surgical procedure. Informed written consent was obtained from all individual participants included in this study. The patients' outcome data were retrieved from a prospectively monitored database. All procedures were approved by Ethical Committee Palermo 1, protocol number 340/20/20. Preoperative manometric evaluation was performed after a reasonable washout period following suspension of all medical therapy influencing IAS tone. The manometric evaluation was carried out with a manometric sensor (2.1 mm external diameter) with four circular orifices and a latex microballoon at its extremity (Marquat C87; Boissy, St-Leger, France). The machine was connected to a polygraph (Narco; Byosystem MMS 200, Houston, TX) using the station pull-through method with perfusion of normal saline and the patient lying in the right lateral position. At manometric evaluation, maximum resting pressure (MRP) and maximum squeeze pressure (MSP) were defined as the maximum pressure detected at rest and after voluntary contraction, respectively. Ultraslow wave activity (USWA) was defined as pressure waves with a frequency of less than 2/min and an amplitude greater than 25 cm H2O (4). Data collected on healthy subjects by our anorectal pathophysiology laboratory (5) showed that the normal values of MRP and MSP were 68.1 ± 12.3 mmHg and 112 ± 36.2 mmHg, respectively; USWA was detected in 10% of subjects. In accordance with Jones et al. (6), the normal range of MRP was 45-85 mmHg; CAF with hypertonic IAS was therefore defined as that with MRP values > 85 mmHg. Manometric follow-up was performed at 12 and 24 months after surgery. All patients, 30 minutes before surgery, received topical administration inside the anal canal of 2 g of ointment-based 0.3% nifedipine and 1.5% lidocaine (Antrolin®). All patients underwent fissurectomy and anoplasty with V-Y skin flap advancement, lying in a gynecological position under spinal or general anesthesia. In order to expose the anal canal we used four Kocher forceps placed at the 3, 6, 9 and 12 o'clock positions to avoid employing anal retractors; an Eisenhammer retractor or a speculum was gently introduced only when necessary. After injection of 5 ml of local anesthetic solution (100 mg mepivacaine hydrochloride and 0.025 mg L-adrenaline), the fibrotic edges were excised with a scalpel until normal, non-fibrotic anodermal tissue showed sufficient bleeding. The sentinel skin tag and the hypertrophied papilla at the level of the dentate line were excised when present, according to Gupta and Kalaskar (7). The tissue at the base of the fissure was curetted until clean muscle fibers of the IAS were exposed. Diathermy was not used, and careful attention was paid to avoiding damage to the IAS. Standard advancement anoplasty was performed using a flap of healthy skin tissue, which was mobilized and then advanced with its blood supply to fill in the defect.
The flap was secured without tension to the anal canal, and the skin behind the advancement flap was closed tension-free in a V-Y manner with interrupted rapidly absorbable sutures. Before surgery, all patients received a small-volume phosphate-saline enema. Metronidazole was administered intravenously at a dose of 500 mg 1 h before surgery; subsequently, it was administered per os at a dosage of 250 mg three times daily for 7 days. We prescribed for all patients topical administration inside the anal canal of 2 g of ointment-based 0.3% nifedipine and 1.5% lidocaine (Antrolin®) twice a day for 15 days. During the first two weeks after surgery, patients took variable doses of psyllium fibers. A laxative preparation (sennosides) was given orally to subjects who had not yet passed stools 3 days after surgery. Immediately after surgery, all patients received 100 mg of diclofenac intramuscularly for analgesia and were instructed to take only 100 mg nimesulide tablets as required. The primary goal of the study was the patient's complete healing and the evaluation of incontinence and recurrence rates; the secondary goal included the evaluation of MRP, MSP, USWA, symptom relief (bleeding, itching and pain) and pro forma recording of the immediate and long-term complications related to the flap (anal stenosis, keyhole deformity, urinary retention) and side effects of nifedipine and lidocaine.

Figure 1. Intraoperative picture showing the excision of the
Both duration and intensity of post-defecation pain were evaluated. Pain intensity was scored with a visual analogue scale (VAS) from 0 to 10, where 0 corresponded to no pain and 10 to the worst pain conceivable. Anal incontinence was assessed preoperatively and at 1, 6, 12 and 24 months from surgery using the Pescatori grading system (8): A, incontinence for flatus and mucus; B, for liquid stool; C, for solid stool; 1, occasional; 2, weekly; 3, daily. Patients were discharged 24 hours after surgery; afterwards, they were examined until they were completely healed and were also followed up at 1, 6, 12 and 24 months after the surgical procedure. Independently of the scheduled appointments, patients were seen on request. Continuous variables were expressed as mean with standard deviation and qualitative data as absolute frequencies; MRP values were also given as median and range. Student's t-test with Welch correction was used to analyze the differences in pain score and pain duration at each registration point. Values of P < 0.05 were considered statistically significant.

Results
This study included 10 women and 20 men. At the time of the surgical procedure, the median age of the patients was 36 years (range 18-61). Demographic data of the study patients are reported in the table. Bowel function was normal in 5 patients, 20 patients suffered from constipation and 5 from diarrhea; bowel function was assessed according to the updated Rome IV diagnostic criteria. Five women were nulliparous, 2 gave natural birth (both underwent an episiotomy) and 3 patients gave birth by caesarean section. Clinical features of the anal fissures are also reported. All wounds healed completely within 40 days after surgery. Intensity and duration of post-defecation pain were significantly reduced with respect to pre-operative values starting from the first defecation (p < 0.01). None of the patients complained of pain, bleeding or itching 40 days after surgery. Analgesic consumption decreased significantly after the first defecation (data not shown). We recorded 2 cases of pre-operative anal incontinence (6.6%); both cases were type A1 according to the Pescatori grading system (8). In neither of these patients did anal incontinence worsen. We did not observe any "de novo" post-operative anal incontinence. Pre-operatively, USWA was detected in 18 out of 30 patients (60%). A comparison between healthy subjects and patients with CAPF showed a significant difference (p < 0.001). At 12 months of follow-up, the detected rate of USWA was lower in comparison to pre-operative values (p < 0.05) but not significantly higher than in healthy subjects. At 24 months of follow-up, the USWA values of CAPF patients were not statistically significantly different from those of healthy subjects.
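As an aside on the statistical analysis described above, the Welch-corrected t-test for pain scores at each registration point can be reproduced with a minimal sketch like the one below. This is not the authors' code, and the VAS values are invented placeholders rather than study data.

```python
# Minimal sketch of a Welch-corrected t-test on VAS pain scores.
# Placeholder data only; not values from this study.
from scipy import stats

vas_preop = [8, 7, 9, 8, 6, 9, 7, 8, 8, 7]            # before surgery
vas_first_defecation = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]  # first post-op defecation

# equal_var=False requests Welch's correction for unequal variances
t_stat, p_value = stats.ttest_ind(vas_preop, vas_first_defecation, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 deemed significant
```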
There were no cases of urinary retention, anal stenosis or keyhole deformity. No necrosis of the transposed flap was observed. We did not record any local complications related to the administration of nifedipine and lidocaine; all patients reported good compliance with the ointment-based therapy. The only complications recorded post-operatively were of slight entity and in no case required further surgery; in particular, one infection was detected at the donor site and a partial breakdown of the flap occurred in one case.

Discussion
The results of our study show that fissurectomy and anoplasty associated with post-operative topical administration of nifedipine and lidocaine, as a treatment for patients affected by CAPF with IAS hypertonia, allows an early resolution of clinical symptoms as well as fast healing of the wounds. We recorded a low rate of recurrence (6.6%); we did not observe any "de novo" case of post-operative anal incontinence, and the two patients who were already suffering from it did not experience any worsening. Moreover, MRP values started to decrease significantly from 12 months after the procedure, reaching, at 24 months after surgery, values similar to those of healthy subjects. Anal fissure (AF) arises from continuous microtrauma in the perianal area, often induced by long-term constipation or diarrhea. The intense pain caused by this injury produces IAS contraction, which in turn leads to avoidance of defecation; this causes a progressive hardening of stools and subsequently a worsening of the proctologic disease. IAS hypertonia has a strong role in the pathogenesis of CAPF. Various studies in vivo (9) and postmortem (10) revealed that the posterior anal commissure is poorly perfused in comparison with other sections of the anal canal; hence, IAS hypertonia might aggravate this hypoperfusion and slow down the healing process, enabling the disease to become chronic. Several studies showed that surgical procedures aiming to reduce IAS tone allow complete healing in most patients (11,12); other series showed instead that the healing process of CAF is independent of anal pressure. Pascual et al. (13) confirmed that there are no statistically significant differences in manometry or endoanal ultrasound findings between healed and non-healed CAF as concerns anal pressure. Furthermore, surgical procedures such as fissurectomy and anoplasty with flap advancement may lead to the resolution of CAF without interfering with IAS tone (14-17); moreover, years after this procedure a normalization of IAS tone can be observed, occurring concurrently with CAF healing (18). In light of these considerations, we find it reasonable to treat CAPF with IAS hypertonia by employing a surgical procedure that preserves the integrity of the IAS, in association with pharmacological sphincterotomy aiming to improve anal canal blood perfusion. Fissurectomy is the most common surgical procedure used to preserve the structural and functional integrity of the IAS. Fissurectomy, as a wound debridement, removes the bradytrophic scar tissue and produces fresh wound edges, creating an acute fissure. This surgical procedure has been associated with pharmacological sphincterotomy in order to improve its results as well as to reduce its complications (19-24).
After surgical fissurectomy, with or without chemical sphincterotomy (25-32), complete second-intention wound healing may take more than 10 weeks, and the rate of failure can reach 34% (30). The rate of recurrence reaches up to 37% in some series (28). The high rates of recurrence and healing failure after fissurectomy might be related to the fact that this procedure leaves a naked ischemic area, whose blood supply previously arrived from branches of the inferior rectal arteries, which cross the hypertonic IAS fibres (9,10,33); in this regard, the use of drugs that reduce IAS tone aims to improve the blood supply of the naked area. The use of a flap to cover the naked area after fissurectomy is designed to relocate onto this area healthy, fresh, well-vascularized tissue perfused by other arterial districts. Another possible advantage of using a flap might be its enlarging effect on the cutaneous circumference of the anal canal, which reduces the risk of splitting. Several surgical techniques for the use of flaps have been described, including skin and mucosal flaps; skin flaps are the most frequently used. Various types of skin flaps are known, such as sliding skin grafts, house advancement flaps, V-Y advancement anoplasty, island advancement flaps and rotation flaps (15,34-38). Surgical procedures that involve the employment of a flap guarantee a shorter healing period, a lower incidence of non-healed wounds or recurrence, and minimal interference with anal continence compared with surgical fissurectomy alone, with or without pharmacological sphincterotomy (16,17,39,40). Fissurectomy associated with anoplasty is a surgical procedure employed both for patients with hypertonic IAS and for those with normotonic IAS (38,41). In light of the excellent results of this surgical procedure, some authors suggest using fissurectomy with anal advancement flap as the first-line therapy for CAF (15,17,42,43). To the best of our knowledge, the association of fissurectomy with anoplasty and pharmacological sphincterotomy has previously been performed only using botulinum toxin, as reported in a few works (41,44-46). The benefits of nifedipine might be due not only to its reduction of IAS tone, carried out through the inhibition of calcium flow into the sarcoplasm of the IAS, but also to its anti-inflammatory action (47-52). Experimental studies indicate that nifedipine has a modulating effect on microcirculation (53) as well as a local anti-inflammatory effect (54). Moreover, nifedipine might contribute to the healing process of CAF through its additional free-radical scavenging properties (55). In our study we did not observe any adverse effects related to the use of nifedipine, such as headache or perianal dermatitis, and patients' compliance was excellent. The only surgery-related complications we recorded were of slight entity and never required further surgery. Furthermore, we recorded 2 cases of recurrence, which did not occur at the same site as the original lesion and both healed with medical therapy; this might be because of the durability of the advancement flap (15).
Conclusion
Our preliminary work shows that fissurectomy and anoplasty with V-Y cutaneous advancement flap, associated with topical administration of nifedipine and lidocaine, was highly effective as a treatment for CAPF with IAS hypertonia; this approach stands apart for its low recurrence rate and the absence of "de novo" cases of post-operative anal incontinence or adverse effects. Nevertheless, we must underline the necessity of further randomized trials comparing fissurectomy alone versus fissurectomy combined with a graft, with or without pharmacological sphincterotomy, in order to better define the role of these approaches as a treatment for CAPF with IAS hypertonia. No financial support was obtained in the preparation of this study. The Authors report no conflict of interest in this work. The Authors declare that they have no competing interests. Di Vita G., Sciumé C. and Martorana G. revised and approved the final manuscript and its conclusions. Informed written consent was obtained from all individual participants included in this study. All procedures performed in studies involving human participants were in accordance with the ethical standards of the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Our institution's ethics committee did not deem it necessary to issue explicit approval for this study. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
2020-11-04T14:08:24.384Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "d6a11db8a7fc0f65842c51ba9307369d8d766990", "oa_license": null, "oa_url": "https://doi.org/10.21614/chirurgia.115.5.585", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "591af1af5214ac8f2297a17e77a05eb8fa0e3c98", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
90897419
pes2o/s2orc
v3-fos-license
Phylogenomic analyses resolve historically contentious relationships within the Palaeognathae in the presence of an empirical anomaly zone

Palaeognathae represent one of the two basal lineages in modern birds, and comprise the volant (flighted) tinamous and the flightless ratites. Resolving palaeognath phylogenetic relationships has historically proved difficult, and short internal branches separating major palaeognath lineages in previous molecular phylogenies suggest that extensive incomplete lineage sorting (ILS) might have accompanied a rapid ancient divergence. Here, we investigate palaeognath relationships using genome-wide data sets of three types of noncoding nuclear markers, together totalling 20,850 loci and over 41 million base pairs of aligned sequence data. We recover a fully resolved topology placing rheas as the sister to kiwi and emu + cassowary that is congruent across marker types for two species tree methods (MP-EST and ASTRAL-II). This topology is corroborated by patterns of insertions for 4,274 CR1 retroelements identified from multi-species whole genome screening, and is robustly supported by phylogenomic subsampling analyses, with MP-EST demonstrating particularly consistent performance across subsampling replicates as compared to ASTRAL. In contrast, analyses of concatenated data supermatrices recover rheas as the sister to all other non-ostrich palaeognaths, an alternative that lacks retroelement support and shows inconsistent behavior under subsampling approaches. While statistically supporting the species tree topology, conflicting patterns of retroelement insertions also occur and imply high amounts of ILS across short successive internal branches, consistent with observed patterns of gene tree heterogeneity. Coalescent simulations indicate that the majority of observed topological incongruence among gene trees is consistent with coalescent variation rather than arising from gene tree estimation error alone, and estimated branch lengths for short successive internodes in the inferred species tree fall within the theoretical range encompassing the anomaly zone. Distributions of empirical gene trees confirm that the most common gene tree topology for each marker type differs from the species tree, signifying the existence of an empirical anomaly zone in palaeognaths.

The scaling-up of multigene phylogenetic data sets that accompanied rapid advances in DNA sequencing technologies over the past two decades was at first heralded as a possible end to the incongruence resulting from stochastic error associated with single-gene topologies (Rokas et al. 2003; Gee 2003). However, it soon became clear that conflicting, but highly supported, topologies could result from different data sets when sequence from multiple genes was analyzed as a concatenated supermatrix, leading Jeffroy et al. (2006) to comment that phylogenomics, recently coming to signify the application of phylogenetic principles to genome-scale data (Delsuc et al. 2005), could instead signal "the beginning of incongruence". On the one hand, these observations highlighted the need for more sophisticated models to account for nonphylogenetic signal such as convergent base composition or unequal rates that can become amplified in large concatenated data sets (Jeffroy et al. 2006). But at the same time, there was a

(Mirarab and Warnow 2015) to further strengthen support for this topology relative to that obtained from concatenated sequence data.
Additionally, we employ phylogenomic subsampling to investigate consistency in the underlying signal for conflicting relationships recovered from species tree versus concatenation approaches, and use likelihood evaluation and coalescent simulation to assess the underlying gene tree support for the recovered species tree topology and the existence of an empirical anomaly zone in palaeognaths. Throughout, we consider the variation in signal among classes of noncoding nuclear markers that are becoming increasingly adopted for phylogenomic studies (Table S1). We chose to analyze noncoding sequences primarily because coding regions across large taxonomic scales in birds are known to experience more severe among-lineage variation in base composition than noncoding regions, which can complicate phylogenetic inference.

Candidate introns were identified using BEDTools to output coordinates for annotated intron features in the galGal4 genome release that did not overlap with any annotated exon feature. Chicken coordinates for these introns were lifted over to target palaeognaths in the whole-genome alignment as described for CNEEs above, and filtered to remove duplicated regions, requiring pairwise sequence identity of 70% and fewer than 0.5 undetermined sites (e.g. gaps and Ns) per

in the whole-genome alignment was allowed, resulting in a data set of 3,158 UCEs. Blastn searches with NCBI's default "somewhat similar sequences" parameters (evalue 1e-10, perc_identity 10, penalty -3, reward 2, gapopen 5, gapextend 2, word_size 11) were used with query sequence from each of the three kiwi species included in the whole-genome alignment to identify orthologous regions in the North Island brown kiwi (Apteryx mantelli; Le Duc et al. 2015), which was not included in the WGA. North Island brown kiwi sequence was added for loci that had consistent top hits across blastn searches, with a single high-scoring segment pair (HSP) covering at least 50% of the input query sequence and minimum 80% sequence identity. Sequence was also added from a reference-based genome assembly of the extinct little bush moa (Anomalopteryx didiformis; Cloutier et al. 2018) that was generated by mapping moa reads to the emu genome included in the whole-genome alignment. Emu coordinates from the

beginning with different random number seeds and with ten independent tree searches within each run. The species tree topology was inferred using best maximum likelihood gene trees as input, and node support was estimated from MP-EST runs using gene tree bootstrap replicates as input. ASTRAL-II v. 4.10.9 (Mirarab and Warnow 2015, hereafter ASTRAL) was also run using the best maximum likelihood gene trees to infer the species tree topology, and the 500 gene tree bootstrap replicates were used to estimate node support.
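As an illustration of the blastn orthology filtering criteria described above (a consistent top hit with a single HSP covering at least 50% of the query at a minimum of 80% identity), the sketch below applies those thresholds to pre-parsed hits. This is not the authors' pipeline; the function and field names are ours, and the example values are hypothetical.

```python
# Minimal sketch of the HSP filtering rule: single HSP, >= 50% query
# coverage, >= 80% percent identity. Assumes hits already parsed from
# tabular blastn output (-outfmt 6) into dicts with pident/length keys.
def passes_filter(query_len: int, hsps: list[dict]) -> bool:
    """Return True if the top hit satisfies the locus-inclusion criteria."""
    if len(hsps) != 1:  # require a single high-scoring segment pair
        return False
    hsp = hsps[0]
    coverage = hsp["length"] / query_len
    return coverage >= 0.5 and hsp["pident"] >= 80.0

# Hypothetical example: one 900-bp HSP at 86% identity on a 1,200-bp query
print(passes_filter(1200, [{"length": 900, "pident": 86.0}]))  # -> True
```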
For all three methods, bootstrap supports were placed on the inferred species tree using RAxML, and trees were outgroup rooted with chicken using ETE v. 3 (Huerta-Cepas et al. 2016).

For each marker type (CNEEs, introns, and UCEs), loci were randomly sampled with replacement to create subsets of 50, 100, 200, 300, 400, 500, 1000, 1500, 2000, 2500, and 3000 loci. This process was repeated ten times to create a total of 110 data sets per marker type (e.g. 10 replicates of 50 loci each, 10 replicates of 100 loci, etc., for each of CNEEs, introns, and UCEs). Topologies were inferred for bootstrap replicates of each data set using MP-EST, ASTRAL, and ExaML as described above. However, for reasons of computational tractability, 200 rather than 500 bootstrap replicates were used for each method (including ExaML), and MP-EST was run once (rather than three times) for each data set, although still with ten tree searches within each run. Support for alternative hypotheses regarding the sister group to the rheas, and the sister to emu + cassowary, was estimated by first counting the number of bootstraps that recovered each alternative topology from the 200 bootstraps run for each replicate, and then calculating the mean value for each hypothesis across the ten replicates within each data set size category.

Analyses of gene tree heterogeneity were conducted using both the best maximum likelihood gene trees and their bootstrap replicates. Relative support for each gene tree topology was assessed by computing AIC from the log-likelihood score (lnL) of the inferred gene tree and the lnL obtained when the input alignment for each locus was constrained to the species tree topology. For each locus, we tested the estimated gene tree topology against an a priori candidate set of probable trees that enforced monophyly of the five higher-level palaeognath lineages (kiwi, emu + cassowary, rheas, moa + tinamous, ostrich), but allowed all possible rearrangements among those lineages (for a total of 105 trees in the candidate set, the number of rooted binary topologies for five lineages, plus the gene tree topology itself if it did not occur within this set). We also tested gene trees against a second set of candidate topologies using the same criteria as above, but additionally allowing all possible rearrangements within a monophyletic tinamou clade (for 1,575 candidate topologies, i.e. 105 × 15, where 15 is the number of rooted arrangements of a four-taxon tinamou clade). For each gene, in addition to the reported P-value for the fit of the species tree topology, we also calculated the rank of the species tree topology relative to all tested candidates from P-values sorted in ascending order.

The proportion of observed gene tree heterogeneity consistent with coalescent variation was estimated through simulation. For each marker type, we estimated ultrametric species tree branch lengths in mutational units (µT, where µ is the mutation rate per generation and T is the time in generations).
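A minimal sketch of the subsampling scheme described above, drawing loci with replacement into 10 replicates at each of the 11 subset sizes (110 data sets per marker type). The locus identifiers are dummies and downstream tree inference is omitted; this is an illustration, not the authors' code.

```python
# Minimal sketch of phylogenomic subsampling with replacement.
import random

SUBSET_SIZES = [50, 100, 200, 300, 400, 500, 1000, 1500, 2000, 2500, 3000]
N_REPLICATES = 10

def subsample(loci, sizes=SUBSET_SIZES, reps=N_REPLICATES, seed=42):
    """Return {size: [replicate_1, ..., replicate_reps]}, each replicate a
    list of loci drawn with replacement, i.e. 110 data sets per marker type."""
    rng = random.Random(seed)
    return {size: [rng.choices(loci, k=size) for _ in range(reps)]
            for size in sizes}

# Example with dummy identifiers standing in for the 5,016 introns
intron_loci = [f"intron_{i}" for i in range(5016)]
data_sets = subsample(intron_loci)
assert sum(len(reps) for reps in data_sets.values()) == 110
```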
Coalescent species tree methods, but not concatenation, recover congruent palaeognath relationships

Using genome-wide data sets of 12,676 CNEEs, 5,016 introns, and 3,158 UCEs, we recover fully congruent topologies across all marker types and for the combined total evidence tree using MP-EST and ASTRAL coalescent species tree approaches (Fig. 1, Suppl. Fig. S1). Concatenation analyses do not recover the same emu + cassowary sister group relationship to kiwi as is seen for MP-EST and ASTRAL, but with casuariiforms instead placed as the sister to moa + tinamous, with 60% support for CNEEs and 96% support for UCEs (Suppl. Fig. S1).

The robustness of the underlying signal for these inconsistently recovered relationships was therefore assessed with phylogenomic subsampling. MP-EST analyses rapidly accumulate support for a sister-group relationship between rheas and the emu/cassowary + kiwi clade (Fig. 2a-c, Fig. 3a,j), with support for alternative hypotheses sharply dropping off for replicates with greater than 200 loci. Support accumulates more slowly for ASTRAL, but the hypothesis of rheas as sister to emu/cassowary + kiwi clearly dominates, and support for alternatives declines in replicates with more than 1000 loci for all markers (Fig. 2d-f, Fig. 3). In contrast, subsampling replicates are less consistent for relationships inferred under concatenation with ExaML (Fig. 2g-i, Fig. 3). In particular, CNEEs oscillate between recovering rheas as the sister to moa + tinamous or as the sister to all other non-ostrich palaeognaths, although always with low bootstrap support (Fig. 2g, Fig. 3g). The other two marker types more clearly support the topology recovered from full data sets with ExaML that places rheas as sister to the remaining non-ostrich palaeognaths, although bootstrap support for UCE replicates is generally weak (Fig. 2h,i; Fig. 3). Subsampling provides even more robust support for emu + cassowary as the sister to kiwi, with both MP-EST and ASTRAL quickly accumulating support for this clade and with rapidly declining support for all other hypotheses (Suppl. Fig. S2a-f, Suppl. Fig. S3). ExaML intron replicates also steadily accumulate support for this relationship with an increasing number of loci (Suppl. Fig. S2h, Suppl. Fig. S3h,q). The alternative hypothesis of emu + cassowary as sister to moa + tinamous, which is favored by CNEEs and UCEs analyzed within a concatenation framework, is not well supported by subsampling, where conflicting topologies characterized by low support are recovered across ExaML replicates (Suppl. Fig. S2g,i; Suppl. Fig. S3).

Patterns of CR1 retroelement insertions corroborate both the inferred species tree topology from MP-EST and ASTRAL and the existence of substantial conflicting signal consistent with incomplete lineage sorting across short internal branches

In total, 4,301 informative CR1 insertions were identified from multispecies genome-wide screens, the vast majority of which (4,274 of 4,301, or 99.4%) are entirely consistent with relationships inferred from sequence-based analyses
(Fig. 4a; analysis here was restricted to species in the whole-genome alignment, and little bush moa and North Island brown kiwi are therefore not included). Not surprisingly, we identify many more insertion events occurring along shallower branches with longer estimated lengths, and fewer insertions along the shorter branches that form the backbone of the inferred species tree (refer to Fig. 1 for estimated branch lengths).

Of the 27 (0.6%) CR1s that are inconsistent with the species tree topology, two conflict with the inferred relationships within kiwi, and nine contradict relationships among tinamous (Fig. 4b,c). However, in each case, conflicting CR1s are far outweighed by markers that support the species tree relationships.

Distributions of estimated gene tree topologies illustrate that the most common topology for each marker type is not the species tree topology inferred by MP-EST and ASTRAL, thereby suggesting the existence of an empirical anomaly zone (Fig. 5a-c). While the ranking of specific gene tree topologies differs across marker types, common to these anomalous gene trees (AGTs) that occur at higher frequency than the species tree topology is the fact that both the shallowest clades as well as the deepest split between the ostrich and all other palaeognaths are maintained throughout (Fig. 5d). Rearrangements of AGTs relative to the MP-EST and ASTRAL species tree topology instead involve the two short internal branches forming the common ancestor of emu and cassowary with kiwi, and of this clade with rheas (Fig. 5d).

To more fully investigate the observed gene tree heterogeneity, we considered all

with rheas means that the emu/cassowary + kiwi + rhea clade is actually recovered less often than alternatives.

We next considered whether topological differences between estimated gene trees and the species tree are well supported, or are instead likely to primarily reflect gene tree estimation error. Mean bootstrap support for estimated gene trees is relatively high, especially for introns and UCEs (83.9% and 82.8%, respectively; Fig. 7a-c). However, average support falls by about 10% for each marker type when gene tree bootstrap replicates are constrained to the species tree topology, with P < 0.0001 for paired t-tests of each data set. These results suggest that differences from the species tree are broadly supported by variation in the sequence alignments of individual loci. To test this further, we compared the Akaike information criterion (AIC) for estimated gene trees to the AIC obtained when the sequence alignment for each gene was constrained to the species tree topology. Approximately 80% of CNEEs have ΔAIC (gene tree - species tree) less than -2, indicating substantial support in favor of the gene tree topology relative to that of the species tree (Burnham and Anderson 2002), while the proportion was even greater for introns and UCEs (approximately 90% with ΔAIC < -2, Fig. 7d).
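A note on the ΔAIC comparison above: since the free maximum likelihood gene tree and the species-tree-constrained fit share the same number of free parameters k, AIC = 2k - 2lnL and the k terms cancel, so the comparison reduces to the log-likelihood difference. A minimal sketch with placeholder lnL values (not values from the study):

```python
# Minimal sketch of the delta-AIC criterion for gene tree vs species tree.
# dAIC = AIC(gene tree) - AIC(species tree) = -2 * (lnL_gene - lnL_constrained)
# because the two fits have the same parameter count k.
def delta_aic(lnL_gene_tree: float, lnL_species_tree_constrained: float) -> float:
    """dAIC < -2 indicates substantial support for the gene tree topology."""
    return -2.0 * (lnL_gene_tree - lnL_species_tree_constrained)

# Example: the ML gene tree fits this locus 4.7 log-likelihood units better
print(delta_aic(-12345.6, -12350.3))  # -> -9.4, well below the -2 threshold
```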
Despite this result, approximately unbiased (AU) tests typically failed to reject the hypothesis that the data fit the species tree topology, with only about 20% of introns and UCEs, and 30% of CNEEs, rejecting the species tree topology at P < 0.05 (Fig. 7d). However, the species tree topology is also not commonly among the top 5% of candidate alternative topologies when these alternatives are ranked according to increasing AU test P-value within each locus (Fig. 7d).

In keeping with the results for all loci, gene tree topologies are also generally supported for loci falling within AGT groups. Support for individual gene trees is somewhat weak for CNEEs, with low median bootstrap support and few substitutions occurring along branches that conflict with the species tree topology (Suppl. Fig. S4a,d), which is consistent with the shorter average alignment length and lower variability of these loci (Suppl. Fig. S5). However, support is much stronger for introns and UCEs, with most loci having bootstrap support above 50% for conflicting clades and ΔAIC < -2, indicating much stronger likelihood support for the recovered gene tree topology relative to that obtained if sequence alignments are constrained to match the inferred species tree (Suppl. Fig. S4).

Simulations were used to further assess what proportion of total gene tree heterogeneity is likely attributable to coalescent processes rather than to gene tree estimation error (Fig. 8). Using either Robinson-Foulds (RF) distances or the matching cluster distance, which is influenced less by the displacement of a single taxon than is the RF metric (Bogdanowicz et al. 2012), and comparing empirical gene trees to gene trees simulated from the species trees inferred with MP-EST and ASTRAL for each marker type, we find that coalescent processes alone can account for more than 70% of the observed gene tree heterogeneity in most comparisons, and >90% for introns when gene trees are simulated from MP-EST coalescent branch lengths (Fig. 8).

In parallel, there has been a renewed focus on palaeognath phylogeny throughout the past

We recover a topology placing rheas as sister to emu/cassowary + kiwi that is congruent across all analyses for MP-EST and ASTRAL species tree methods, but that differs from the placement of rheas as sister to the remaining non-ostrich palaeognaths when concatenated data are analyzed. These conflicting topologies are recovered with maximal support in at least some data sets for both coalescent and concatenation analyses. However, subsampling approaches (Edwards 2016b) provide more robust support for the coalescent species tree, and this topology is further corroborated by patterns of CR1 retroelement insertions from multiway genome-wide screening. At the same time, conflicting CR1 insertions suggest extensive ILS across the short internal branches separating major palaeognath lineages, and coalescent lengths for these pairs of branches fall within the theoretical range expected to produce anomalous gene trees.
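Returning to the simulation comparison above, tree-to-tree Robinson-Foulds distances can be computed with the ete3 toolkit that the study already uses for tree handling. A minimal sketch with toy newick strings (not palaeognath gene trees):

```python
# Minimal sketch of a Robinson-Foulds comparison between two gene trees.
from ete3 import Tree

simulated = Tree("((A,B),((C,D),E));")
empirical = Tree("((A,C),((B,D),E));")

# robinson_foulds returns a tuple starting with (rf, max_rf, ...)
rf, max_rf = simulated.robinson_foulds(empirical, unrooted_trees=True)[:2]
print(f"RF = {rf} of a maximum {max_rf} (normalized: {rf / max_rf:.2f})")
```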
Indeed, we find that the most common gene tree for each marker type does not match the inferred species tree topology, consistent with an empirical anomaly zone in palaeognaths.

Although we contrast results from concatenation and coalescent species tree methods, we

In our data, bootstrap support falls when gene trees are constrained to the species tree topology, and clades that conflict with the species tree tend to have at least 50% median bootstrap support, suggesting that gene tree heterogeneity is real rather than reflecting gene tree estimation error alone. However, we concur that short internodes pose substantial challenges to accurate gene tree inference, and investigations of the empirical anomaly zone should greatly benefit from algorithms that make 'single-step' coestimation of gene trees and species trees scalable to phylogenomic data sets.

In conclusion, we find strong evidence that past difficulty in resolving some palaeognath relationships is likely attributable to extensive incomplete lineage sorting within this group, and that species tree methods accommodating gene tree heterogeneity produce robustly supported topologies despite what appears to be an empirical anomaly zone. We echo the sentiments of other authors that high bootstrap support alone is an inadequate measure of confidence in inferred species trees given increasingly large phylogenomic data sets (Edwards 2016b). This is certainly not a new idea in phylogenetics, but an increasing emphasis on its importance in the era of species trees will continue to advance the field beyond reports of highly supported, but often discordant, 'total evidence' topologies toward a more nuanced 'sum of evidence' approach that considers not just which topologies are produced but also what they can tell us about the underlying evolutionary processes and our attempts to model them.
2022-06-01T04:04:05.734Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "712c04dbe085d577ddc4d3d2c37ecdfd8d01be34", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/sysbio/article-pdf/68/6/937/30795608/syz019.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "712c04dbe085d577ddc4d3d2c37ecdfd8d01be34", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
219658602
pes2o/s2orc
v3-fos-license
Three patterns of chronic cerebrospinal venous insufficiency in Ménière syndrome patients: Diagnosis and treatment options

Identification techniques for the three different chronic cerebrospinal venous insufficiency patterns and the related treatment options are in an initial phase of evaluation and analysis. Our purpose is to describe their appropriate management, proposing a tailored approach for each one. We identified three different Ménière syndrome patients in our Audiology Department, diagnosing the corresponding chronic cerebrospinal venous insufficiency pattern by echo-color Doppler ultrasound evaluation and treating it by venous angioplasty or rehabilitative treatment according to the internal jugular and vertebral vein anomalies found in each patient. According to the pattern, after specific treatment, echo-color Doppler control analysis revealed a normalized venous outflow correlated with a reduction and/or progressive disappearance of Ménière symptoms during one year of follow-up. An adequate analysis of cerebral and ear venous outflow and a tailored treatment may represent an effective option when chronic cerebrospinal venous insufficiency is correctly diagnosed.

Introduction
Cerebrospinal venous drainage was initially analyzed by Zamboni, who identified the possible anomalies of the extracranial venous system responsible for chronic cerebrospinal venous insufficiency (CCSVI), 1 classified among truncular venous malformations (TVM). 2 These malformations occur during fetal life and are caused by an altered development of the vascular trunk, with the consequence of possible venous anomalies such as hypo- or hyperplasia of the venous system, as well as intraluminal defects such as fibrosis, septa and/or incomplete valves. The innovation of these studies and of the literature of the following years is that, before them, venous anomalies were considered simple and irrelevant anatomical variants without clinical significance, because of: i) limited anatomical and physiological knowledge; ii) extreme interpersonal variability; iii) the lack of codified parameters that could allow these venous anomalies to be identified and classified. The consequence was that the limited and never deepened understanding of the anatomical variants and pathophysiological mechanisms of the extracranial venous system led to an underestimation of the impact and importance of cerebral and ear venous drainage abnormalities, as well as of their role in a variety of central nervous system and ear disorders. However, CCSVI is now well characterized, with three different patterns identified according to the presence of vascular defects and/or extrinsic obstacles such as muscle compression, 3 which we have identified in three distinct patients referred to the Audiology Department of our hospital. Ethics board approval and patient publication consent forms were obtained for this case series.

Case #1
A 65-year-old woman was admitted to our Audiology Department for progressive hearing loss (mainly in the right ear) associated with continuous and bothersome tinnitus and dizziness episodes of increasing incidence over the last 15 years (on a weekly basis). Previous medical treatments had not given effective and durable benefit. Cerebral computed tomography and magnetic resonance (MR) imaging scans were negative, while an audiometric exam confirmed chronic right sensorineural hearing loss.
Cerebrospinal outflow echo-color Doppler (ECD) ultrasound evaluation of the vertebral veins (VV) and internal jugular veins (IJV), both in the supine and in the upright position, was performed to assess the hemodynamic criteria of CCSVI, revealing a complete septum at the junction of the right IJV and the subclavian vein (the first part of the J1 segment) with consequent outflow acceleration (Figure 1A), while the left IJV revealed hypoplasia in all three assessed segments. These findings were confirmed by selective venography of both IJVs (Figure 1B), while analysis of the azygos vein (AV) revealed the absence of anomalies, so we proceeded to a balloon dilation of the right IJV (16-mm-diameter, 40-mm-long balloon for the right J1 segment) using a MAXI LD dilation catheter (Cordis, Johnson & Johnson, Miami, Florida). Final venography (Figure 1C) and one-month ECD analysis confirmed the improvement of cerebral outflow; moreover, we observed a progressive resolution of tinnitus and dizziness episodes as well as an improvement of the audiometric examination during one year of follow-up (Figure 1D).

Case #2
A 54-year-old woman with bothersome and continuous bilateral tinnitus associated with auricular fullness (mainly in the left ear) and monthly episodes of dizziness, sometimes accompanied by autonomic phenomena (nausea and vomiting), underwent an audiometric examination (essentially normal) as well as cerebral computed tomography and MR imaging scans (negative). Cerebrospinal outflow ECD ultrasound evaluation of the VV and IJV, both supine and upright, revealed a partial muscular compression of the right IJV and a complete muscular compression of the left IJV in the J2 and J3 segments (Figure 2A) in both positions, with almost complete resolution during the Valsalva maneuver, revealing the real caliber of the IJV and the absence of vascular anomalies with a normal outflow, thus denoting type 2 CCSVI. We decided to direct the patient to rehabilitative treatment for muscular compression by the sternocleidomastoid (SCM) and omohyoid (OM) muscles, with the goal of complete muscle relaxation and restoration of physiological IJV caliber and venous flow. After a complete cycle of rehabilitative treatment, used for the first time for this CCSVI type by our team, 4 ECD analysis showed a significant basal reduction of the muscle compression and a normalized outflow through the IJVs (Figure 2B), with a feeling of comfort for the patient due to the progressive attenuation of tinnitus and the disappearance of auricular fullness.

Case #3
A 28-year-old woman with Ménière syndrome (MéS) diagnosed three years earlier (auricular fullness associated with tinnitus and frequent dizziness episodes, sometimes on a daily basis) was admitted to our Audiology Department. The audiometric examination was normal, and routine cerebral computed tomography and magnetic resonance imaging scans were negative. Cerebrospinal outflow ECD ultrasound evaluation, both supine and upright, revealed a significant muscle compression of both IJVs in the J2 and J3 segments (Figure 3A) and vascular anomalies corresponding to criteria 1, 3 and 4 (Figure 3B) according to the Zamboni classification. 1 The IJV showed an annular constriction in J1 due to a rigid and immobile septum, so the association of muscular and vascular anomalies identified type 3 CCSVI.
After a complete cycle of rehabilitative treatment for relaxation of the sternocleidomastoid and omohyoid muscles, confirmed by ECD analysis that revealed a normalized J2 and J3 caliber and the disappearance of the muscle compression, we performed a selective venography of the IJVs (Figure 3C) and a balloon dilation (14-mm-diameter, 40-mm-long balloon for treatment of both IJVs) using a MAXI LD dilation catheter. The AV was also investigated during phlebography, but no anomalies were revealed. Final venography (Figure 3D) and one-month ECD assessment confirmed the normalized cerebral outflow. All three patients received a one-year follow-up ECD analysis, confirming a persistently normalized cerebral outflow through the VV and IJV.

Discussion
In recent years, many authors have tried to answer the question raised by Zamboni's initial study on CCSVI: how these extracranial venous anomalies may have pathologic consequences for brain and ear physiology, and what role they play in the common etiopathogenetic mechanism of different pathologies, initially associated exclusively with multiple sclerosis (MS) and subsequently with Parkinson's or Alzheimer's disease, MéS and migraine, 5 as well as transient global amnesia and retinal abnormalities. 6 This work led to a position paper with standardized noninvasive and invasive imaging protocols for evaluating extracranial venous abnormalities indicative of CCSVI, 7 based on ECD images to observe the vascular anomalies, both structural/morphological and hemodynamic, and, when feasible, on MR images with an intravenous contrast agent allowing the evaluation of venous outflow abnormalities. The key point is that these studies give the venous system its due attention, because until now it has stood in the shadow of the arterial system in terms of consideration and depth of study in the literature and in clinical practice. Moreover, the correct intuition was that, if the cerebral and ear venous system is altered, this may have a pathological impact, as happens in other districts such as the lower limb, and especially in a district as complex as the head and neck. The available information and data confirm that a correct ECD analysis by expert operators and an adequate treatment of extracranial venous anomalies may represent an effective option in many patient settings, because when CCSVI is present in these different pathologies it seems to participate in the appearance and/or increase of lesions in the white matter and/or to predispose to the associated disorder and symptoms, altering cerebrospinal fluid (CSF) dynamics and brain perfusion, as well as promoting an excess of endolymphatic volume responsible for the endolymphatic hydrops of Ménière patients, 8 with specific, organ-related anomalies, since there may be a disease-specific typology of abnormalities. 9 Bavera et al. showed that CCSVI may be a common mechanism in pathologies as different as MS and MéS, demonstrating not only a high prevalence of these anomalies in these two populations compared with a control population, but also that CCSVI may act as a predisposing factor, showing different patterns in the two pathologies.
Their work underlines that neurological and ear pathologies as different as MS and MéS may have CCSVI as a common element, taking a different form according to the pathology: anomalies mainly involving the J1 and J2 segments in MS patients, versus J2-J3 segment anomalies in MéS, as well as jugular bulb anomalies and mediolateral or anteroposterior positioning that may determine encroachment of these structures and components. Mandolesi et al. reported a classification of CCSVI into three different patterns:3 type 1 CCSVI, due to an endovascular obstacle, called hydraulic; type 2 CCSVI, due to a muscular compression without endovascular anomalies, called mechanical; and type 3 CCSVI, presenting both endovascular and extravascular anomalies, called mixed. We propose a flow chart (Figure 4) which may guide different treatment options according to the CCSVI pattern: selective venography of the IJV and dilation treatment in case of endovascular anomalies (type 1), as described previously by our team11 and in other hospital protocols as a safe and effective treatment for this form of anomaly;12 an adequate rehabilitation program for SCM and OM muscle relaxation in case of mechanical extravascular obstruction (type 2), as described previously by our team in the first report of a conservative treatment for this anomaly as an alternative to muscular resection;4 and a combined treatment in case of the mixed pattern (type 3), according to ECD images and, where feasible, MR images (Table 1). In fact, for CCSVI due exclusively to muscular entrapment of the IJV (pattern 2) and/or in association with vascular anomalies (pattern 3), until recently the literature described surgical resection of the OM and SCM muscles as the only treatment option.13,14 IJV entrapment due to muscular compression has been related not only to ear disorders such as MéS; recently, De Bonis et al. indicated a possible association with high-pressure hydrocephalus and enlarged ventricles, calling this syndrome JEDI (jugular entrapment dilated ventricles intracranial hypertension). In that report, they describe how jugular decompression allowed a significant reduction of intracranial pressure.15 Zamboni and his team have shown significant data about pattern 3 CCSVI treatment, combining endophlebectomy with removal of the defective valves and with muscle resection, thus achieving a unique combined surgical treatment that solves both the muscular and the vascular anomalies and restores cerebral and ear outflow.16 Endophlebectomy is considered an alternative option to IJV angioplasty when vascular anomalies (intraluminal defects such as immobile valves, septa, or inverted valve orientation) are prone to recurrence, and muscular resection was long considered the only possible treatment for the muscular component, while recently our team has successfully proposed a rehabilitative treatment to resolve the muscle compression of the IJV. Here we have described the three different patterns of CCSVI, proposing the indication to follow for each pattern in terms of a tailored treatment approach, as indicated in our flow chart (Figure 4); the surgical option represents the second choice in patterns 2 and 3, if conservative treatment is not effective (see the sketch below). On the other hand, quite recently, different venographic patterns have been used to give tailored indications for venoplasty in cases of CCSVI associated with multiple sclerosis.17
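To make the pattern-to-treatment mapping of the flow chart easier to follow, the sketch below restates it as a small decision function. This is purely illustrative and is not part of the original flow chart: the function name, the pattern encoding, and the escalation flag are our own assumptions, and actual clinical decisions depend on the full ECD/MR work-up described above.

```python
# Illustrative restatement of the Figure 4 flow chart (hypothetical code,
# not a clinical tool). Pattern numbers follow the Mandolesi classification:
# 1 = hydraulic (endovascular), 2 = mechanical (muscular), 3 = mixed.
def ccsvi_first_line_treatment(pattern: int, conservative_failed: bool = False) -> str:
    if pattern == 1:
        # Endovascular obstacle: venography plus balloon dilation.
        return "selective IJV venography + balloon dilation"
    if pattern == 2:
        if conservative_failed:
            # Surgery is the second choice when rehabilitation fails.
            return "surgical muscle resection (second choice)"
        return "rehabilitative SCM/OM muscle relaxation"
    if pattern == 3:
        if conservative_failed:
            return "combined surgical treatment (second choice)"
        # Mixed pattern: rehabilitation first, then venography + dilation,
        # as in Case #3 above.
        return "rehabilitation, then selective venography + balloon dilation"
    raise ValueError("CCSVI pattern must be 1, 2, or 3")

print(ccsvi_first_line_treatment(3))
```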
Procedure

The patient is asked to lie supine (clinostatism), to hyperextend the neck, and to turn the head to the side opposite the muscle to be treated (first on one side and then on the other). With the fingertips we gently follow the anatomical course of the involved muscles along their entire length, from the mastoid process down to their sternal or clavicular insertion, applying sliding movements, pincer compressions, and hyperextension until complete relaxation is obtained. The patient is then asked to stand (orthostatism) and the same treatment is repeated to obtain the same result in this position.

Involved muscles

The manual decontracting path is aimed at stretching every single muscle in the neck region. Particular attention must be paid to the sternocleidomastoid, omohyoid, stylohyoid, mylohyoid, infrahyoid, suprahyoid, and digastric muscles. The previous ECD analysis clearly indicates the muscle involved in the IJV compression, the site (J1 and/or J2 and/or J3), and whether the treatment should be bilateral.

Duration

Each treatment lasts about 60 minutes, for a total of 12-15 sessions.

Follow up

At the end of the established sessions, we evaluate symptoms, ECD findings (to investigate whether the IJV muscle compression has resolved), and the audiological examination, deciding either to repeat the treatment (if the results are not satisfactory) or to recommend a course of maintenance therapy every 30 days (if the results are satisfactory) in order to preserve the achieved benefits.
Adenovirus E3-19K Proteins of Different Serotypes and Subgroups Have Similar, Yet Distinct, Immunomodulatory Functions toward Major Histocompatibility Class I Molecules

Our understanding of the mechanism by which the E3-19K protein from adenovirus (Ad) targets major histocompatibility complex (MHC) class I molecules for retention in the endoplasmic reticulum is derived largely from studies of Ad serotype 2 (subgroup C). It is not well understood to what extent observations on the Ad2 E3-19K/MHC I association can be generalized to E3-19K proteins of other serotypes and subgroups. The low levels of amino acid sequence homology between E3-19K proteins suggest that these proteins are likely to manifest distinct MHC I binding properties. This information is important as the E3-19K/MHC I interaction is thought to play a critical role in enabling Ads to cause persistent infections. Here, we characterized interaction between E3-19K proteins of serotypes 7 and 35 (subgroup B), 5 (subgroup C), 37 (subgroup D), and 4 (subgroup E) and a panel of HLA-A, -B, and -C molecules using native gel, surface plasmon resonance (SPR), and flow cytometry. Results show that all E3-19K proteins exhibited allele specificity toward HLA-A and -B molecules; this was less evident for Ad37 E3-19K. The allele specificity for HLA-A molecules was remarkably similar for different serotypes of subgroup B as well as subgroup C. Interestingly, all E3-19K proteins characterized also exhibited MHC I locus specificity. Importantly, we show that Lys91 in the conserved region of Ad2 E3-19K targets the C terminus of the α2-helix (MHC residue 177) on MHC class I molecules. From our data, we propose a model of interaction between E3-19K and MHC class I molecules.

Ads are widespread in the human population, with at least 51 human serotypes (Ad1 to Ad51) known that are classified into six subgroups (A-F) (1, 2). Following primary infection of the respiratory tract, Ads of subgroup C (the most commonly studied subgroup) can be recovered from stool for months, and even years, after the virus is no longer detected in nasopharyngeal specimens (3, 4). It is believed that this is a manifestation of the ability of Ads to counteract host antiviral immune responses and establish persistent infections. Diseases caused by Ad infections are generally mild except in immunosuppressed individuals, such as AIDS patients and transplant recipients, in which case Ad infections can be fatal (5-8). It was first shown that Ad2 could drastically decrease the expression of MHC class I molecules on Ad2-infected cells (9-11). The E3-19K protein of Ad2 was found to be specifically responsible for this effect (11). It is now understood that E3-19K binds directly to and retains MHC class I molecules in the ER. Consequently, E3-19K down-regulates MHC class I molecules on Ad-infected cells and protects cells from recognition and lysis by allospecific CTLs (9-17). E3-19K-mediated suppressing effects on CTL activity were shown in Ad-infected human and mouse cells (15, 16, 18). Moreover, in vivo data support a role for E3-19K in Ad infections (19); lungs of cotton rats infected with a mutant Ad carrying a deletion in the gene encoding E3-19K showed a more severe immunopathology than lungs infected with wild-type Ad. It was suggested that the absence of E3-19K in the mutant virus activated CTLs as part of the inflammatory response to the infection (19).
Thus, there is convincing evidence from in vitro and in vivo studies that E3-19K, through its association with MHC class I molecules, facilitates the undetected replication of the virus in infected host cells. This is thought to contribute to the ability of Ads to establish and maintain persistence (19-21). E3-19K is a type I transmembrane glycoprotein that includes an N-terminal ER lumenal domain and a short C-terminal cytosolic tail. The ER lumenal domain of E3-19K binds with high affinity to the ER lumenal domain of MHC class I molecules (12, 22-26), and the dilysine motif in the cytosolic tail of E3-19K provides the signal for localization in the ER (12, 27, 28). The ER lumenal domain of E3-19K has been subdivided into three regions with loosely defined boundaries (26, 29) as follows: 1) residues 1 to ~78/81 are rather variable between E3-19K proteins of different subgroups; 2) residues ~79/82 to 98 are rather conserved between E3-19K proteins of different subgroups; and 3) residues 99 to 107 link the ER lumenal domain to the transmembrane domain. To date, we have a limited understanding of how the variable and conserved regions of E3-19K are involved in targeting MHC class I molecules. Similarly, we have a weak understanding of how the low levels of sequence homology (as low as ~34%) between E3-19K proteins of different serotypes and subgroups affect their immunomodulatory function. It is reasonable to assume that such significant differences in sequence will translate into differential MHC I binding properties. Consistent with this view, we showed previously that Ad2 E3-19K associates with MHC class I molecules in an allele- and locus-specific manner (30, 31). It was also reported that Ad2 E3-19K co-immunoprecipitated more readily with HLA-A2 than HLA-B7 (32). Furthermore, it was shown previously that Ad2 and Ad19a E3-19K proteins of subgroups C and D, respectively, differ in their capacity to inhibit the trafficking of MHC class I molecules to the cell surface (12). This kind of information on the MHC I binding function of E3-19K proteins of different Ad serotypes and subgroups is critical not only for our understanding of the structure/function relationship of E3-19K but also because it is thought that the ability of a particular Ad serotype to replicate and establish persistent infection in host cells may depend on the avidity of its E3-19K protein for the HLA molecules present in the infected individual (21, 33). There is therefore an epidemiological component to our understanding of Ad pathogenesis that can be probed from a systematic study of E3-19K function across Ad serotypes and subgroups. In this study, we carried out a systematic analysis of interaction between E3-19K proteins of Ad serotypes 7 and 35 (subgroup B), 5 (subgroup C), 37 (subgroup D), and 4 (subgroup E) and HLA-A (A*0301, A*1101, A*3101, A*3301, and Aw*6801), HLA-B (B*0702 and B*0801), and HLA-C (Cw*0304 and Cw*0401) molecules using site-directed mutagenesis, native gel, SPR, and flow cytometry. A molecular and cellular characterization of E3-19K proteins across serotypes and subgroups will provide a more in-depth understanding of its immunomodulatory function and will also generate knowledge relevant for advances in Ad pathogenesis.

EXPERIMENTAL PROCEDURES

The cDNA of the full-length HLA-A*1101[E177K] heavy chain was generated by the QuikChange approach using the plasmid pCR2.1-TOPO/HLA-A*1101 (a gift from Dr. P.
Parham, Stanford University School of Medicine, Stanford, CA) as template, followed by insertion into the HindIII and XbaI restriction sites of the pcDNA3.1 vector (Invitrogen). The plasmid was linearized with BglII followed by transfection into C1R cells by electroporation as described previously (31). Stable transfectants expressing HLA-A*1101[E177K] were isolated by selection in 600 μg/ml G418 for 2-3 weeks. The cell-surface expression of HLA-A*1101[E177K] was determined by flow cytometry (see below).

Viruses and Infection

Ad2 and H2dl801 (H2dl801 has a deletion in the gene encoding Ad2 E3-19K (37)) (a gift from Dr. W. Wold) were grown in HeLa S3 cells as described previously (31). Viruses were purified using the Adeno-X virus purification kit (Clontech) as described previously (31).

In Vitro Reconstitution of MHC Class I Molecules

Recombinant, soluble MHC class I molecules were assembled from the inclusion bodies of class I heavy chains and β2m (or biotinylated β2m) in the presence of synthetic peptides in an oxidative refolding buffer (35). Refolded β2m was biotinylated as described previously (31). The MHC I-restricted peptides used for refolding have been described previously (31). Proteins were purified on a Superdex 200 HR 10/30 column in 20 mM Tris and 150 mM NaCl (pH 7.5).

Native Gel Band Shift Assay

E3-19K proteins (14 μg) were incubated with MHC class I molecules (20 μg) (2:1 molar ratio) on ice in 20 mM Tris and 150 mM NaCl (pH 7.5) for 30 min.

Surface Plasmon Resonance

Quantitative analyses of interaction in E3-19K/MHC I pairs were performed using SPR at 20°C on a Biacore T100 instrument as described previously (31). Biotinylated MHC class I molecules were immobilized on a streptavidin-coated sensor chip (Series S Sensor Chip SA, certified) at a density of ~750-800 resonance units using a stock solution (~30 nM) in HBS-EP (10 mM Hepes, 150 mM NaCl, 3 mM EDTA, and 0.05% Surfactant P-20 (pH 7.4)) at a flow rate of 60 μl/min. Uncoupled streptavidin sites were blocked with biotin (30 μl/min for 1 min). Solutions of E3-19K proteins (2.5 nM to 10 μM) were injected over the sample and control flow cells (60 μl/min for 4 min), and binding responses (with control responses automatically subtracted) were recorded. Binding surfaces were regenerated with HBS-EP. Equilibrium dissociation constants were determined by plotting binding responses at steady state versus E3-19K concentrations based on a 1:1 Langmuir binding model using the Biacore T100 evaluation software (version 2.0.1, Biacore).

Qualitative and Quantitative Analysis of Interaction between E3-19K Proteins of Different Serotypes and Subgroups and MHC Class I Molecules

Interaction between E3-19K proteins of subgroups B (Ad7 and Ad35), C (Ad5), D (Ad37), and E (Ad4) and HLA-A (A*0301, A*1101, A*3101, A*3301, and Aw*6801), -B (B*0702 and B*0801), and -C (Cw*0304 and Cw*0401) molecules was monitored by a native gel band shift assay and SPR (Fig. 2 and Table 1, respectively). Results show that Ad37 E3-19K interacts with HLA-A molecules (Fig. 2A), as evidenced by new bands on the native gel that migrate at different positions from those of uncomplexed Ad37 E3-19K and HLA-A molecules. A quantitative analysis of interaction in Ad37 E3-19K/HLA-A complexes by SPR yielded equilibrium dissociation constants (Kd) of ~240 nM (Table 1). Interaction between E3-19K proteins of subgroups B, C, and E and the same HLA-A molecules was also determined by SPR (Table 1).
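For readers unfamiliar with steady-state SPR analysis, the sketch below shows how an equilibrium dissociation constant can be estimated from steady-state responses under a 1:1 Langmuir model. It is a minimal illustration, not the authors' actual pipeline (the study used the Biacore T100 evaluation software), and the concentration/response values are hypothetical placeholders.

```python
# Minimal sketch: estimating Kd from steady-state SPR responses under a
# 1:1 Langmuir binding model, R_eq = R_max * C / (Kd + C).
# Illustrative data only; the study used the Biacore T100 software.
import numpy as np
from scipy.optimize import curve_fit

def langmuir_1to1(conc, r_max, kd):
    """Steady-state response for a 1:1 interaction."""
    return r_max * conc / (kd + conc)

# Analyte (E3-19K) concentrations in nM and hypothetical steady-state
# responses in resonance units (RU), spanning the injected range.
conc = np.array([2.5, 10, 40, 160, 640, 2560, 10000])  # nM
r_eq = np.array([8, 29, 84, 170, 248, 290, 305])       # RU

popt, pcov = curve_fit(langmuir_1to1, conc, r_eq, p0=[300.0, 100.0])
r_max_fit, kd_fit = popt
kd_err = np.sqrt(np.diag(pcov))[1]

print(f"R_max = {r_max_fit:.1f} RU, Kd = {kd_fit:.1f} +/- {kd_err:.1f} nM")
```

The fit is the same plot-response-versus-concentration procedure described above; in practice one would also check that the responses have truly reached steady state before fitting.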
The Kd values clearly indicate that these E3-19K proteins display allele-specific interaction with HLA-A molecules, a binding property that was less apparent for Ad37 E3-19K (at least with this particular set of HLA-A molecules). Interestingly, this allele specificity is remarkably similar for different serotypes of subgroup B (compare Ad7 with Ad35 E3-19K) as well as subgroup C (compare Ad2 with Ad5 E3-19K). Moreover, results from SPR show that, overall, Ad37 E3-19K of subgroup D exhibits the weakest binding affinities (highest Kd values) for HLA-A molecules, and Ad4 E3-19K of subgroup E displays the strongest binding affinities. The characterization of Ad37 E3-19K was extended to include interaction with HLA-B and -C molecules. Results from native gel (Fig. 2B) revealed that interaction between Ad37 E3-19K and HLA-B*0702 led to a protein smear with a very faint band at the expected position of a complex. In contrast, no new band could be seen on the gel for the mixture of Ad37 E3-19K and HLA-B*0801, although a weak smear of the HLA-B*0801 band could be observed. These results are consistent with Ad37 E3-19K interacting, at best, very weakly with HLA-B molecules, causing complexes to dissociate during electrophoresis. Analysis of interaction with HLA-C molecules failed to produce new bands on the gel (Fig. 2B). Instead, intense and focused bands corresponding to uncomplexed Ad37 E3-19K and HLA-C molecules were observed, which is consistent with a lack of complex formation. Finally, interaction between Ad37 E3-19K and HLA-B molecules could not be determined reliably by SPR because of weak binding responses, and no interaction could be detected with HLA-Cw*0304 (Table 1). Taken together, these results show that Ad37 E3-19K manifests locus-specific interaction with MHC class I molecules, i.e., higher avidity for HLA-A relative to -B molecules and no interaction with HLA-C molecules. An SPR analysis of interaction between E3-19K proteins of subgroups B, C, and E and the same HLA-B and -C molecules (Table 1) revealed allele-specific interaction with HLA-B molecules and MHC I locus-specific interaction, as observed for Ad37 E3-19K. Overall, results from native gel and SPR showed that E3-19K proteins of subgroups B-E exhibit allele-specific interaction (this was less apparent for Ad37 E3-19K of subgroup D) and MHC I locus specificity (higher avidity for HLA-A relative to -B molecules and no interaction with HLA-C molecules).

MHC Residue 177 Modulates Differentially the Immunomodulatory Function of E3-19K Proteins of Different Serotypes and Subgroups

Early studies have identified putative E3-19K-binding sites at both the N terminus of the α1-helix and the C terminus of the α2-helix (23, 24). Consistent with this, we showed previously that MHC residue 56, located at the N-terminal end of the α1-helix (Fig. 3), critically influences the immunomodulatory function of Ad2 E3-19K toward HLA-A molecules (31). Here, we sought to examine whether MHC residue 177, located at the C-terminal end of the α2-helix (Fig. 3), also plays a role in modulating interaction with E3-19K. Importantly, MHC 177 is a conserved and solvent-exposed Glu residue that occupies a position structurally equivalent to that of MHC 56, i.e., the end of an α-helical segment. These arguments make the negatively charged Glu177 a potential "hot spot" for E3-19K interaction on MHC class I molecules.
To assess the role of MHC residue 177, we introduced a Glu177 to Lys177 mutation in the HLA-A*1101 heavy chain and monitored interaction first with Ad2 E3-19K (subgroup C) by native gel (Fig. 4A). Results clearly show that the E177K mutation in HLA-A*1101 abolished interaction with Ad2 E3-19K, relative to interaction with HLA-A*1101, as evidenced by the lack of a complex band and the presence of a strong band corresponding to uncomplexed HLA-A*1101[E177K]. We extended this analysis to include Ad7 and Ad35 (subgroup B), Ad5 (subgroup C), Ad37 (subgroup D), and Ad4 (subgroup E) E3-19K proteins (Fig. 4A). Results show that Ad5, Ad37, and Ad4 E3-19K proteins are also affected, most likely to different extents, by the E177K mutation in HLA-A*1101.

[Figure 3 legend: Structure of the MHC class I peptide-binding region showing residue 56 (Gly56 in HLA-A*1101; solvent-exposed Arg56 in HLA-A*3101) at the N-terminal end of the α1-helix and the conserved solvent-exposed residue Glu177 at the C-terminal end of the α2-helix. The conserved solvent-exposed residues Glu53 and Glu173 are also labeled; these residues represent potential interaction sites with conserved residues in E3-19K proteins. The backbone-to-backbone distance between residues 56 and 177 is 19.8 Å. The bound peptide is omitted from the groove and only Cα atoms are shown (except for the side chain of Glu177). The N and C termini are indicated.]

Binding affinities measured by SPR (Table 1) extend the results from native gel and provide convincing evidence that MHC residue 177 in HLA-A*1101 is critical for interaction with E3-19K proteins and that the extent of its modulating effect differs with serotypes and subgroups. To show that the effect of MHC residue 177 on interaction with Ad2 E3-19K has functional consequences in cells, Ad2-infected C1R cells stably expressing HLA-A*1101[E177K] were characterized by flow cytometry (Fig. 4B). The cell-surface expression of HLA-A*1101[E177K] on infected C1R cells, relative to mock-infected cells, was determined to be 83.6%. This is significantly higher than the cell-surface expression of HLA-A*1101, which was determined to be 25.2% under identical conditions. This effect on MHC I expression was specific to E3-19K, as infection of C1R cells with the H2dl801 virus lacking the gene encoding Ad2 E3-19K (37) had essentially no effect on cell-surface expression of HLA-A*1101[E177K] and HLA-A*1101 (Fig. 4B). Taken together, the significantly weaker affinity of Ad2 E3-19K for HLA-A*1101[E177K] (Table 1), relative to HLA-A*1101, allowed the mutant to escape retention in the ER of infected C1R cells and, consequently, be expressed at significantly higher levels on the cell surface. Overall, results from native gel, SPR, and flow cytometry unambiguously show that MHC residue 177 in HLA-A*1101 critically influences the immunomodulatory function of E3-19K proteins in a way that varies according to serotypes and subgroups. Thus, in addition to MHC 56, we have identified MHC 177 as another important E3-19K-binding site on MHC class I molecules.

MHC Residues 56 and 177, Insights into the E3-19K-Binding Mode

A close examination of our in vitro binding data (Table 1) shows that E3-19K proteins of subgroups B, D, and E, but not those of subgroup C, display rather similar binding affinities for HLA-A*3101 relative to other HLA-A molecules. It is the noticeably weaker affinity of Ad2 E3-19K of subgroup C for HLA-A*3101 (Table 1) that points to a modulating role for MHC residue 56, which is Arg56 in HLA-A*3101 but Gly56 in HLA-A*1101 (Fig. 3). To probe the relative contributions of MHC residues 56 and 177, interaction between Ad7 E3-19K and single and double mutants of HLA-A*1101 at these two positions was characterized by SPR (Table 1).
The Kd values indicate that mutations at positions 56 (462 ± 9.2 nM) and 177 (136.9 ± 1.0 nM) each weakened interaction with Ad7 E3-19K relative to HLA-A*1101 (48.7 ± 3.1 nM). Interestingly, the mutation at residue 56 had a noticeably more suppressing effect on interaction with E3-19K than the mutation at residue 177. Finally, the Kd value for Ad7 E3-19K association with the double mutant HLA-A*1101[G56R,E177K] is 733 ± 18 nM. This value is higher than the Kd values of either of the single mutants and thus most likely reflects a combination of their individual suppressing effects. Taken together, these studies provide evidence that the Ad7 E3-19K/HLA-A*1101 interface includes interaction with both MHC residues 56 and 177 and that MHC 56 likely represents a more dominant contact site in the complex.

Lys91 in Ad2 E3-19K Targets the C Terminus of the α2-Helix on MHC Class I Molecules

To date, we have a rather weak understanding of how E3-19K uses its variable and conserved regions to associate with MHC class I molecules. This knowledge is important, as it undoubtedly can help in understanding the molecular basis of differential interaction in E3-19K/MHC I pairs. In a very recent study (36), we showed that residues 89-93 (89MSKQY93) in the conserved region of Ad2 E3-19K are essential for its immunomodulatory function. Here, to gain insight into where this critical stretch of residues from the conserved region preferentially binds on MHC class I molecules, i.e., at the N terminus of the α1-helix or the C terminus of the α2-helix (Fig. 3), we introduced a Lys91 to Glu91 mutation in Ad2 E3-19K and monitored interaction with HLA-A*1101 and HLA-A*3101 using a native gel band shift assay. First, note that Lys91 is conserved (or substituted by an Arg residue) in all E3-19K proteins (the numbering of the equivalent Lys or Arg varies with serotype), thus suggesting an important functional role for this residue. Second, as shown in Fig. 3, the N terminus of the α1-helix in HLA-A*1101 has Gly56, whereas that in HLA-A*3101 carries Arg56, and the C terminus of the α2-helix in both of these alleles carries Glu177. Finally, we showed previously that Ad2 E3-19K binds strongly to HLA-A*1101 but considerably more weakly to HLA-A*3101 (see Table 1) (32). Taken together, an analysis of interaction between E3-19K[K91E] and HLA-A*1101 and HLA-A*3101 will likely allow us to distinguish just how E3-19K uses a critical stretch of residues from its conserved region to target MHC class I molecules, i.e., through interactions at the N terminus of the α1-helix or the C terminus of the α2-helix. The K91E mutation markedly weakened the affinity of Ad2 E3-19K for HLA-A*1101 (Table 1); the considerably more weakly bound Ad2 E3-19K[K91E]/HLA-A*1101 complex allowed HLA-A*1101 to be expressed on the surface of C1R cells at significantly higher levels. A similar analysis with HLA-A*3101 showed that this allele is, however, expressed at similarly high levels on the surface of both Ad2 E3-19K/A31/C1R and Ad2 E3-19K[K91E]/A31/C1R co-transfectants (Fig. 5B), 77.2 versus 83.2%, respectively. These results suggest that for intrinsically weak complexes such as the Ad2 E3-19K/HLA-A*3101 complex, effects that further weaken binding affinities do not necessarily translate into markedly higher levels of MHC I cell-surface expression.
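As a quantitative aside on the Ad7 double-mutant comparison made earlier in this section, the statement that the double mutant most likely combines the individual suppressing effects can be checked on a free-energy scale, since ΔΔG = RT ln(Kd,mut/Kd,wt) is approximately additive for independent contacts. The short script below is our own back-of-envelope illustration, not part of the paper's analysis; it assumes T = 293 K (matching the 20°C SPR temperature) and uses the Kd values quoted above.

```python
# Double-mutant comparison on a free-energy scale using the Kd values
# reported in the text; ddG = RT * ln(Kd_mut / Kd_wt). Assumes T = 293 K
# (the 20 degC SPR temperature); purely an illustrative check.
import math

R = 1.987e-3   # kcal/(mol*K)
T = 293.0      # K
RT = R * T

kd_wt     = 48.7   # nM, HLA-A*1101
kd_g56r   = 462.0  # nM, HLA-A*1101[G56R]
kd_e177k  = 136.9  # nM, HLA-A*1101[E177K]
kd_double = 733.0  # nM, HLA-A*1101[G56R,E177K]

def ddg(kd_mut):
    """Change in binding free energy relative to wild type (kcal/mol)."""
    return RT * math.log(kd_mut / kd_wt)

print(f"ddG(G56R)      = {ddg(kd_g56r):.2f} kcal/mol")
print(f"ddG(E177K)     = {ddg(kd_e177k):.2f} kcal/mol")
print(f"sum of singles = {ddg(kd_g56r) + ddg(kd_e177k):.2f} kcal/mol")
print(f"ddG(double)    = {ddg(kd_double):.2f} kcal/mol")
# ~1.31 + ~0.60 = ~1.91 kcal/mol versus ~1.58 kcal/mol for the double
# mutant, i.e., roughly (sub-)additive, consistent with two largely
# independent contact sites as the text argues.
```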
Taken together, these biochemical and functional analyses have provided evidence that Lys91 in the conserved region of Ad2 E3-19K is functionally important and that this residue most likely associates with MHC class I molecules through interactions at the C terminus of the α2-helix rather than the N terminus of the α1-helix.

DISCUSSION

A key finding from our characterization of interaction in E3-19K/MHC I pairs is that E3-19K proteins of subgroups B, C, and E display allele-specific interaction with HLA-A and -B molecules; this effect was less obvious for Ad37 E3-19K of subgroup D (at least with this particular set of HLA-A and -B molecules). In a previous study (31) (Table 1), we showed that Ad2 E3-19K displays allele-specific interaction with HLA-A molecules and, importantly, that this specificity correlates (in a negative way) with levels of MHC I expression on Ad2-infected C1R cells. This functional link is further supported here by our data on the differential association of Ad2 E3-19K with HLA-A*1101 relative to both the HLA-A*1101[E177K] and Ad2 E3-19K[K91E] mutants. Thus, allele-specific interaction is a common property of E3-19K proteins, and binding affinities correlate (in a negative way) with levels of MHC I cell-surface expression on infected cells. As expected, our data also indicated that this correlation is less evident for E3-19K/MHC I pairs that have intrinsically weak affinities, i.e., intrinsically high levels of MHC I cell-surface expression. Our results also indicate that the allele specificity of E3-19K proteins for HLA-A molecules is remarkably similar for serotypes of subgroup B as well as subgroup C. Furthermore, our results indicated that Ad37 E3-19K of subgroup D has the weakest binding affinities, whereas Ad4 E3-19K of subgroup E has the highest binding affinities for HLA-A molecules. Interestingly, it was reported previously that immunoprecipitates of MHC class I molecules from Ad-infected 293 cells co-precipitated Ad2 E3-19K (subgroup C) but not Ad19a E3-19K (subgroup D) (40). Because the amino acid sequence of Ad19a E3-19K is identical to that of Ad37 E3-19K, this result is consistent with our finding that Ad37 E3-19K binds very weakly to MHC class I molecules. From these studies, we infer that the different MHC I binding properties of E3-19K proteins are the manifestation of differences in key amino acids at the E3-19K/MHC I interface. Our binding data also showed that E3-19K proteins of subgroups B, C, D, and E display the same locus-specific interaction with MHC class I molecules, i.e., stronger affinities for HLA-A compared with HLA-B molecules, and no interaction with HLA-C molecules. This ability of E3-19K proteins to distinguish between MHC gene products may represent a survival mechanism evolved by Ads to selectively manipulate host CTL and natural killer (NK) cell functions. Indeed, given that HLA-A and -B molecules present viral peptides specifically to receptors on CTLs and that HLA-C molecules present viral peptides specifically to inhibitory receptors on NK cells (41), the MHC I locus specificity of E3-19K proteins may predispose Ads to avoid clearance by both CTLs and NK cells. Other viral immunomodulatory proteins, such as the Kaposi sarcoma-associated herpesvirus K5 protein (42) and Nef from HIV-1 (43), display the same MHC I locus specificity, a property that has been shown to protect virally infected cells from lysis by NK cells (42, 43).
Note that this potential inhibitory mechanism of NK cells is distinct from that proposed recently by McSharry et al. (44), in which E3-19K sequesters MHC class I chain-related proteins A and B, i.e., the ligands of the activating NK receptor NKG2D. From our binding and cell-based studies, we identified MHC residue 177 in HLA-A*1101 as a critical residue for the immunomodulatory function of E3-19K proteins. More specifically, we showed that the extent of the modulating effect of MHC 177 on E3-19K proteins varies with serotypes and subgroups, with serotypes of subgroup B being less sensitive to modulation than those of subgroups C-E. Studies of interaction between HLA-A*3101 and E3-19K proteins also allowed us to conclude that, in a manner similar to MHC 177, MHC 56 differentially modulates the MHC I binding function of E3-19K proteins. Importantly, a characterization of interaction between Ad7 E3-19K and the double mutant HLA-A*1101[G56R,E177K] showed that MHC residues 56 and 177 are both part of the Ad7 E3-19K/HLA-A*1101 interface, with MHC 56 being a more dominant contact site. Interestingly, the backbone-to-backbone distance between MHC residues 56 and 177 (see Fig. 3) is 19.8 Å, and assuming that E3-19K adopts an Ig-like structure (45), its longest dimension would be roughly 33 Å (approximated from the x-ray structures of Ig-like proteins such as β2m, US2, and the α3-domain of the MHC class I heavy chain). Thus, it is entirely reasonable to suggest that the E3-19K structure can span the distance of 19.8 Å. This analysis further supports our experimental findings that MHC residues 56 and 177 are both contact sites in the Ad7 E3-19K/HLA-A*1101 complex. On the basis of our data, we propose a model of interaction in E3-19K/MHC I pairs in which all E3-19K proteins establish evolutionarily conserved interactions with MHC residues at both the N terminus of the α1-helix and the C terminus of the α2-helix of the peptide-binding groove. As suggested by our data, E3-19K proteins most likely use residues from their conserved region, such as the conserved Lys91, to mediate interaction with conserved MHC residues, such as Glu177, at the C-terminal end of the α2-helix. Note that there are at least two other conserved, negatively charged, solvent-exposed MHC residues at the N-terminal end of the binding groove, namely Glu53 and Glu173 (see Fig. 3), that may also be important for interaction with conserved residues in E3-19K proteins. These evolutionarily conserved interactions would provide a basal level of stabilizing energy to the E3-19K/MHC I association. In our model, we also suggest that these interactions are supplemented by other important contacts at both the N terminus of the α1-helix and the C terminus of the α2-helix of the groove that involve residues from the variable region of E3-19K. These contacts would contribute additional binding energy to the E3-19K/MHC I association, the extent of which can vary with serotypes and subgroups depending on the complementarity of the interacting surfaces. Consequently, and as supported by our data, to achieve optimal stabilizing interaction at the E3-19K/MHC I interface, either the N terminus of the α1-helix or the C terminus of the α2-helix will serve as a dominant binding region in different E3-19K/MHC I pairs.
This view of the conserved and variable regions in the E3-19K/MHC I association accounts for why 1) E3-19K proteins of subgroup B or C, which have high levels of sequence homology in their variable regions, display similar allele specificity toward HLA-A molecules; and 2) E3-19K proteins of different subgroups, which show considerably lower levels of sequence homology in their variable regions, display distinctly different binding affinities (Kd values) for a given HLA-A molecule. The critical role of MHC class I molecules in host antiviral immunity has put tremendous pressure on viruses to evolve mechanisms of escape from immune surveillance. For Ads, this is achieved through the intriguing E3-19K protein, which targets MHC class I molecules for retention in the ER. In this study, we have provided new information on the MHC I binding properties of E3-19K proteins of different serotypes and subgroups. From this, a picture is emerging of how E3-19K, with its unique arrangement of variable and conserved domains, mediates interaction with its cellular target. This knowledge helps move forward our understanding of E3-19K function at the molecular level. Also, our results showing major differences in the strength of interaction in different E3-19K/MHC I pairs more firmly establish the notion that this association plays an important role in the pathogenesis of Ads. In a clinical setting, information on Ad serotypes and subgroups and the HLA background of immunocompromised patients is potentially useful for identifying patients who may be at higher risk of developing Ad diseases.
Food Cravings and Aversions during Pregnancy: A Current Snapshot

Food cravings are very common during pregnancy, along with food aversions in many instances, yet their underlying causes are not well understood. Food cravings are usually met with the consumption of the craved food, which generally includes unhealthy foods, that is, sweet/fat and salty/spicy foods. It is important to follow a healthy diet during pregnancy in order to promote healthy outcomes for both mother and infant, but with the consumption of unhealthy craved foods a healthy diet may be difficult to maintain. The objective of this survey was to obtain a current snapshot of food cravings during pregnancy, while also comparing differences between cravers and non-cravers. The results showed that 59% of surveyed mothers had experienced cravings. Having food cravings was associated with greater weight gain during pregnancy, but also with an increased intent to breastfeed. Sweets as a category, along with fruits and vegetables, were recorded as being the most craved foods. Cravers of sweets tended to be of normal weight and were likelier to have given birth to girls. In contrast, women who craved fruits and vegetables were likelier to be overweight and were more likely to have given birth to boys. While reported by nearly one in four of the mothers who craved fruits and vegetables, pica, or the craving of non-food items, was nearly nonexistent among the sweet cravers. These results support previous findings that food cravings remain a normative phenomenon; while evident for the majority of mothers, they can vary tremendously in terms of the foods that are preferred.

Introduction

Despite its being a natural experience, certain types of physiological stress may characterize pregnancy due to changes in metabolic states and hormone levels [1]. Symptoms that are associated with physiological stress during pregnancy include food cravings, aversions, nausea, and vomiting [2,3]. Specifically, a food craving is defined as "an intense desire to eat a specific food" [4]. Numerous investigators throughout the world have studied the phenomenon over the past few decades, finding that food cravings are common during gestation, with estimates of some 60 to 85% of women reporting one or more when pregnant [5-7]. As to foods craved, a study conducted in Saudi Arabia found that nearly 40 percent of pregnant women craved milk, sweets, dates, or salty or sour foods [8], while a study conducted in the United Kingdom reported that most of the participants had food cravings for sweet foods, fruit, and fruit juices [5]. More recently, a Tanzanian study found that about three-quarters of the women experienced cravings, mostly for meat, mangoes, yogurt, oranges, plantains, or soft drinks [2], while researchers on Yasawa Island, Fiji, reported fruit, particularly bananas and plantains, as most often craved [9].
Even though there is no clear etiology of food cravings, scientists suggest a number of factors that may help to explain them. Environmental characteristics, such as availability of foods and geographic factors, exert a large influence on the development of cravings [10]. For example, Taggart [11] noted that two-thirds of her Scottish sample expressed a craving for "foods readily available, the most common being fruit". In contrast, a cross-section of Ethiopian women mostly craved livestock products, ostensibly because of their scarcity at the time of the study [12]. A more curious form of craving is pica, that is, the purposeful consumption of nonfood substances, observed worldwide and posited to be somewhat biologically adaptive but also culturally based [13,14].

Cultural background and traditions can also greatly shape behavior with actual foods during pregnancy [15]. Cultural traditions tie into the availability and preference of foods, and into how daily life influences pregnancy and the chance of developing food cravings. Physiological and psychological changes that occur during pregnancy also affect the likelihood of exhibiting food cravings [16]. For example, it has been found that there is a decrease in the sense of taste during pregnancy, so some expectant mothers may choose more strongly flavored foods in their daily diet [17]. Alternatively, the spike in gestational hormones in mid-pregnancy that regulate the flow of glucose to the developing fetus has been proposed as a triggering mechanism for craving sweets [18]. Finally, food cravings can also be shaped by women's beliefs about what they should consume to ensure a healthy status for their baby. Often women eat certain foods believing that they are precautionary measures to satisfy the nutritional needs of the unborn fetus [5,8].

Regarding their own health and that of the embryo, food aversions may serve the purpose of helping the pregnant female to avoid foods that can carry pathogens or chemical toxins, hence the oft-reported aversions to meat and fish [19,20], as well as to alcoholic and caffeinated beverages [21]. Yet in this vein, some puzzling aversions have also been reported: for example, the avoidance of staple foods like cereal by Ethiopian women [12], or the fear among women of south India that fruits like mangoes and pineapples might cause miscarriage because they are believed to heat the mother's body, or that kala and naval would cause black patches on the infant's skin [22].

A healthy diet is essential during pregnancy in order to increase the chances of healthy outcomes for both the mother and baby [23]. Craving foods that are sugar-laden or processed with salt is contrary to an optimal diet during pregnancy. Therefore, the purpose of this study was to identify popular food groups craved within a U.S. sample, examine the differences between food cravers and non-cravers, and determine how cravers of varied food groups differ from each other. Given current recommendations for fruit and vegetable consumption, cravings for these foods were of particular interest. Data were therefore collected to catalog foods that were craved, daily intake of fruits, vegetables, and dairy products, and the level of nutrition education that was provided to each subject.
Respondents

The subject pool comprised women receiving postnatal care in a university teaching hospital in the northeastern United States. Mothers were recruited within two days after giving birth. Of the 234 mothers who were approached, 204 agreed to participate (87%). Participation in the study was voluntary, and all responses that were recorded were anonymous. The mean age of the mothers was approximately 30 years (29.96 ± 5.84), and they self-identified as one of five different racial/ethnic groups: White (n = 50), Black (n = 36), Hispanic (n = 51), Asian (n = 48), and other (n = 19).

Instrument

The questionnaire, developed for this study, asked about participants' age, ethnicity, weight and height before pregnancy, and weight gain during pregnancy. The respondents also indicated their infants' gender, birth weight, and length, and described any food/nonfood cravings they had experienced during their pregnancies, including particular types of foods they had craved. Respondents also reported whether they had incidents of nausea or vomiting during pregnancy. Women were asked to describe any cravings they had experienced in previous pregnancies, if remembered. The survey also included questions about daily diet, particularly the presence of whole grains, the number of servings of fruit, the number of servings of vegetables, the number of servings of dairy products, and whether the participants had received any nutrition education during their pregnancy.

Procedure

The study protocol was approved by the Institutional Review Board of the university teaching hospital where the study took place, with data collected in the maternity ward during the summer and fall semesters. A bilingual researcher (English and Spanish) visited women individually and invited them to participate in the study on a voluntary basis. Each mother was interviewed, face-to-face, for an average of seven minutes in her hospital room.

Results

Means and frequencies for the survey variables were computed, with independent-sample t-tests subsequently conducted using the Statistical Package for the Social Sciences (SPSS, version 19.0). As shown in Table 1, as a whole the mothers could be classified as borderline overweight based on their BMI of 25.54, with the average weight gained during pregnancy at about 31 pounds. More than half of the mothers (~54%) had experienced a previous pregnancy. Sixty-four percent of the mothers experienced morning sickness during pregnancy, and only 39% of the patients received nutrition education while pregnant. Overall, the incidence of nonfood cravings was low. Regarding mothers' daily diet, on average they ate about three servings of fruits and vegetables and two servings of dairy products. Newborns weighed on average 7 pounds, 8 ounces, and their length was approximately 19¾ inches.

Approximately 59% of the mothers (120/204) experienced cravings during pregnancy. Cravers and non-cravers were first compared in their answers to the questionnaire. As shown in Table 2, women who reported cravings gained more weight (~33 lbs.) than non-cravers (~28 lbs.) and were more likely to have food aversions (53.3% vs. 25.0%). They also consumed more dairy products on a daily basis (2.44 vs. 2.08 servings). The results also show that just over 76% of the mothers who had experienced cravings during previous pregnancies had repeated cravings during their current pregnancy, as compared to about 30% of non-cravers. Likewise, 95% of cravers planned to breastfeed their infants, compared to about 79% of non-cravers.
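For illustration, the group comparison reported above (weight gain of cravers versus non-cravers) corresponds to the kind of independent-samples t-test the study ran in SPSS. The sketch below reproduces the procedure in Python on simulated data; the arrays are hypothetical values drawn around the reported group means and sizes (120 cravers, 84 non-cravers), with an assumed standard deviation, and are not the study's raw data.

```python
# Minimal sketch of an independent-samples t-test like the one the study
# ran in SPSS. The samples are simulated around the reported group means
# (~33 lbs for cravers, ~28 lbs for non-cravers); the SD of 10 lbs is an
# assumption made purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cravers = rng.normal(loc=33, scale=10, size=120)      # 120 cravers
non_cravers = rng.normal(loc=28, scale=10, size=84)   # 204 - 120 = 84

# Welch's variant (equal_var=False) avoids assuming equal group variances.
t_stat, p_value = stats.ttest_ind(cravers, non_cravers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```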
A listing of all foods that were craved, and of foods for which mothers had aversions, was next created. While mothers were free to name as many craved foods (as well as aversions) as they wished, only the first food they mentioned was considered for the present analysis (the listing of all foods that were craved is available from the authors). These food items were clustered into groups as appropriate, with separate categories for certain foods that were mentioned frequently (e.g., watermelon, dill pickles). As shown in Table 3, the most popular items craved were ice cream and other sweets, watermelon, tropical fruits, and fruits/juices.

Based on their conceptual similarities, and to produce nearly even sub-samples, two food categories were created: a Sweets category (sweets, ice cream, and chocolate) and a Fruits and Vegetables (F&V) category (fruits/juices, watermelon, tropical fruits, and vegetables). Responses for the 34 mothers who craved F&V were then compared to those of the 36 mothers who craved Sweets.

As shown in Table 4, mothers who craved F&V averaged a BMI of 26.56, which placed them in the overweight category, and nearly 61% had given birth to boys. At 23.5%, they were also much more likely to have exhibited pica behavior, which was reported by only one of the Sweets-craving respondents. Mothers who craved Sweets averaged a BMI of 23.61, or normal weight status, with about 58% having delivered girls. For well over half of the Sweets cravers (58%) this was their first child, in contrast to only about a third of the F&V cravers (32%) for whom this baby was their first.

Discussion

Food cravings are a common phenomenon during pregnancy. As the present results showed, 59% of the mothers we surveyed reported cravings. This is fairly consistent with other reports that show percentages exceeding 60% [6,7,24]. It has often been posited that specific food cravings may be a response to a nutritional deficiency. However, a number of investigations do not support a relation between dietary intake and cravings, and thus do not conclude that nutritional deficiencies result in cravings [16,17]. In fact, sensory monotony of diet was shown to predict food cravings better than nutritional deprivation [25].

It is generally acknowledged that the most commonly craved foods during pregnancy are chocolate, chips, citrus fruits, pickles, and ice cream [23]. The present results are relatively consonant with that list, as the most demanded foods were sweets, especially ice cream, and F&V, though watermelon and tropical fruits were named more frequently than were citrus fruits. Bayley and colleagues [5] noted fruit, fruit juices, and sweets as most craved, while Orloff and Hormes [3] cited sweets (e.g., chocolate), carbohydrates (e.g., pizza), and animal protein (e.g., steak) at the top of their list. While only one mother in this study mentioned chocolate first, seven others did include chocolate when recounting all of their cravings. Coincidentally, the proportion of women who craved F&V almost equaled the proportion who craved Sweets. Previous studies show that cravings for high-energy/low-nutrient-dense foods like sweets may be explained by the lower cost of these foods, their wide availability, and marketing campaigns [26]. Hence, the craving of F&V by nearly as many women, despite greater expense and less promotion, is a positive finding.
A higher proportion of boys were born to F&V cravers, while more girls were born to cravers of Sweets. To the best of our knowledge, the relationship of food cravings during pregnancy to newborns' gender has not been investigated. Similarly, it is not obvious why cravers of Sweets had a lower incidence of nonfood cravings. One hypothesis may be that sweets are higher in calories, so that cravers might reach satiety more quickly, while the F&V cravers did not feel full as rapidly after eating F&V. Thus, F&V cravers may have looked for additional sources of oral gratification, in this sample primarily by chewing on ice. Again, this correlation has not been investigated by other researchers. Another observation was that, in general, the mothers who had experienced cravings, not exclusively cravers of F&V, were more likely to have food aversions, which is supported by other studies [5,9,10,17]. Food aversions were tabulated, and it was found that high-protein foods like chicken and fish, as well as fatty processed foods, were most prominent for this sample. In contrast, some have reported that meats or other high-protein foods were most often craved [12,27]. It is noteworthy that no respondent named a sweet food, or any fruit for that matter, as a food for which she had an aversion.

Mothers who craved F&V had a higher mean pre-pregnancy BMI than the mothers who craved Sweets. Since cravers of Sweets were leaner before pregnancy, albeit at a healthy weight, it is possible that their desire for such higher-energy foods was due to the need to increase total daily calories [10]. It is recommended that every pregnant woman add about 300 calories per day during the second and third trimesters of pregnancy [23]. Both subgroups of cravers gained about 32 pounds during pregnancy, a greater amount than the 28 pounds reported by non-cravers.
Even though the weight gained during pregnancy by this sample of mothers appeared to be optimal, their mean pre-pregnancy BMI of just over 25 would place half of the mothers into the overweight category. Some reports suggest that approximately 50% of American women are overweight prior to pregnancy, and that more than 50% gain an excessive amount of weight during pregnancy [3]. Excess gestational weight gain is a strong predictor of fetal macrosomia [28] and overweight in children [29]. Another drawback of excessive gestational weight gain is that overweight mothers usually have greater difficulty initiating breastfeeding, and in turn may shorten the duration of breastfeeding due to feelings of discouragement [23,30]. As nearly 95% of the mothers with cravings intended to breastfeed, being excessively overweight would have serious consequences. Of particular relevance here, recent research has identified food cravings as possibly having a role in excess gestational weight gain [31].

The risk of overweight points to the need for women to receive nutrition education during their pregnancy, if not before conception. Fewer than two in five of the surveyed mothers reported that they had received any nutrition education, with most of those who had saying they educated themselves by browsing the internet and reading books. Nutrition education should be of higher quality and taught by health professionals, as it is important that pregnant women learn information based on facts rather than on popular media. The key points of every nutrition education program should focus on maintaining a healthy lifestyle before and during pregnancy, learning the recommended weight gain, and learning which foods should be avoided and which consumed. Women should be educated on how to replenish nutrient stores after delivery, reduce their chances of developing chronic diseases, and prevent problems in later pregnancies [23]. Most importantly, nutrition education should target obesity prevention to increase the number of women who are of normal weight in the pre-pregnancy period.

Another reason for educating pregnant women about nutrition is that mothers may not be eating enough healthy foods. As shown in our results, mothers on average consumed merely three servings of combined F&V per day while pregnant, whereas the 2015-2020 Dietary Guidelines for Americans [32] recommend four servings of fruits and five servings of vegetables. Even the women who craved F&V reported fewer than four servings a day. Moreover, low F&V intake is likely to decrease further after delivery. Specifically, for low-income women the pattern consists of F&V intake decreasing after birth, with intake of fat and added sugar increasing after birth. Thus, postpartum women may be at risk for even lower overall nutrient status [26]. Nutrition education targeted at maternal diet and supplement intake during pregnancy has been shown to improve a variety of maternal and neonatal health indicators [33].
The present results are limited, to be sure, by our reliance on a convenience sample. Nevertheless, the inclusion of Black, Hispanic, and Asian mothers in addition to White respondents speaks to its representativeness. To that end, our findings add to the literature showing that food cravings are common among pregnant women, with the most craved foods including sweets and F&V. Acknowledging that food cravings are real is important both for mothers and for health professionals. While craving and consuming F&V is beneficial for the mother's health, sweets are to be consumed with caution, especially when mothers have chronic diseases. Doctors, dietitians, and mothers themselves can use these results to make healthier dietary choices during pregnancy, especially in efforts to slow the epidemic of obesity. This study showed that many women go into pregnancy overweight, which might put both the mother and infant in danger. Thus, there is a need for women who are pregnant or of childbearing age to seek professional nutrition education during the pregnancy period to learn how to adopt healthy dietary habits.

Table 3: Frequency of craved foods named first.
Palliative Medicine: 10 years as an area of medical practice in Brazil

The World Health Organization (WHO) first defined Palliative Care (PC) in 1990, establishing the practice as one of the focuses of cancer care (prevention, diagnosis, treatment, and PC). The concept was then broadened in 2002 to include any life-threatening disease throughout its entire course, especially, but not only, in advanced and terminal diseases and end-of-life care.1 In 2014, WHO qualified this practice so that its principles of action should guide any good health practice,2 leading to a new definition in 2017.3 Since then, the concept has been broadened even further, involving aspects of human care in situations of suffering related to conditions of social vulnerability and to pandemics.4 In 2020, a group of experts gathered by the International Association of Hospice and Palliative Care (IAHPC) formulated the most recent and comprehensive definition of this practice.3 Despite the successive and recent reformulations of its definition and the large increase in the number of scientific publications in the field, PC has not been recognized as a medical specialization in several countries around the world, in spite of its recognition in the UK and other countries since 1987.5

Development in the Brazilian context

In Brazil, the practice of PC has been described since the 1980s, arising from isolated individual initiatives.6 In 1997, with the creation of the Brazilian Palliative Care Association (ABCP), an entity with a multiprofessional character, the professionals involved in the practice started to organize themselves. They discussed their actions and the preliminary aspects of the so-called "Brazilian Palliativist Movement", holding the first National Congress in the field in 2004 at the headquarters of the National Cancer Institute, in Rio de Janeiro. Then, a group of professionals, understanding that the regulation of this practice was necessary and that this depended on articulation with the medical entities in the country, founded the National Academy of Palliative Care (ANCP) on February 26, 2005. The association was composed of multiple health professionals but directed only by physicians, a mandatory requirement for this articulation at the time. This allowed greater visibility and insertion into the then recently created Technical Chamber of Terminality of Life and Palliative Care of the Federal Council of Medicine (CFM), which was active in the preparation of important documents such as resolution 1.805/2006,7 and the first version of the New Code of Medical Ethics (2009/2010), which was the first to textually mention the term "Palliative Care", in Fundamental Principle number XXII and in articles 36 and 41. The text was maintained in its entirety in the most recent edition of the Code of Medical Ethics.8

Palliative Medicine: a new area of medical practice

In Brazil, the inclusion of an area of knowledge as a recognized specialization is a process that involves specific rules and prerequisites regulated by the competent medical bodies, especially the CFM. In the context of the growing practice and the gradual representation in decisive medical regulatory boards in the country, the time came, in mid-2010, for the official recognition of Palliative Medicine as an area of medical practice. As a first step, the National Commission on Palliative Medicine was created by the Brazilian Medical Association (AMB).
The first meeting of the commission took place on March 30, 2010, with representatives of the medical specialties that, after an open consultation conducted by letter by the AMB with the boards of all medical specialties, understood that Palliative Medicine was an area of practice related to their fields. On this occasion, the societies that initially manifested themselves were Oncology, Internal Medicine, Geriatrics, Family and Community Medicine, and Pediatrics, besides the ANCP itself. Later, the Anesthesiology Society joined the group of specialties considered prerequisites for obtaining the title of Palliative Medicine as an area of practice. This Commission was responsible for the request, approved in August 2011 by the Mixed Committee of Specialties constituted by the CFM and the AMB, that Palliative Medicine be included in the list of recognized areas of practice in Brazil, which, according to the most recent update (resolution 2221/2018, ratifying the CME 01/2018 ordinance), comprises 57 areas of medical practice in the country 9. On May 18, 2012, by letter OF/AMB/0117/2012, the AMB informed the six societies of the first call for the selection of candidates, by curriculum analysis, requiring that candidates have documented experience in Palliative Medicine for at least 5 years and be certified by the AMB in one of the areas listed as prerequisites at the time. In this initial selection process, 45 physicians (20 anesthesiologists, 9 pediatricians, 7 geriatricians, 7 internists, and 2 family physicians) were qualified. From 2013, upon request to the AMB, the ANCP became part, with two representatives, of the board responsible for preparing the exam for new candidates, now by written test and curriculum analysis, along with the other six societies that had participated in the previous selection process. That year, the National Commission of Palliative Medicine decided to hold the sufficiency exams for the area of Pediatrics separately from the other areas. From that moment, the exam started to be held periodically by the AMB, in a place predefined by the entity or during the ANCP Palliative Care Congresses, according to the entity's calendar. The number of medical specialties considered prerequisites for obtaining the title has increased progressively, currently amounting to 12 (Anesthesiology, Internal Medicine, Head and Neck Surgery, Oncological Surgery, Geriatrics, Mastology, Family and Community Medicine, Intensive Care Medicine, Nephrology, Neurology, Clinical Oncology, and Pediatrics). In 2021, Brazil had 389 medical professionals certified by the AMB in this area of practice 10. Medical residency in Palliative Medicine: a requirement As part of the measures required for the regulation of a new area of medical practice in Brazil, guidelines were elaborated for the registration of Medical Residency Programs in Palliative Medicine, lasting one year in addition to the residencies in the prerequisite areas previously listed 11. These guidelines defined the characteristics of the programs and the distribution of the workload among the different care modalities of Palliative Medicine in diverse scenarios of medical practice. Palliative Medicine as a specialty in Brazil: the state of the art Palliative Medicine refers only to the body of knowledge concerning the medical doctor.
By concept, the adequate and competent application of PC practice requires appropriate organization and training of all health care professionals. In this sense, some representative councils of different health professions are already beginning to recognize, by their own criteria, that their professionals may be considered specialists or may have an area of practice in PC. Nevertheless, for medical doctors, one cannot yet, strictly by concept, speak of a professional specialty. Palliative Medicine is an area of practice for which, in Brazil, the candidate for the so-called "title" has only two options: either to undergo a written test prepared by the AMB, or to complete 1 year of Medical Residency in Palliative Medicine in one of the programs duly registered and recognized by the MEC (the Ministry of Education) in the country. However, with the increasing visibility and growth of the area in the country, and following the MEC norms 11, a training alternative for any health professional has taken shape with the emergence of lato sensu postgraduate courses in the area. The norms require a minimum of 360 h of on-site training, certified by a Higher Education Institution (HEI) registered with the MEC. In times of the COVID-19 pandemic, the transition to online training modalities, whether in hybrid-live format or distance learning with recorded classes, is being considered valid by the MEC as long as the minimum workload is respected and duly certified by a registered HEI. In the lato sensu postgraduate modality, there is a rapidly increasing number of Medical and Multiprofessional Specialization courses in Brazil. However, it is fundamental to understand that these courses confer on the professional the title of Academic Specialist, which is not recognized by the AMB and, therefore, is not equivalent to the title of Professional Specialist. More recently, the AMB has recognized that these specialization courses and other education courses (for physicians), with a minimum length of 1 year, may be counted more substantially among the prerequisites for registering for the sufficiency exam for obtaining the title of area of practice in Palliative Medicine. Palliative Medicine as a medical specialty: perspectives Considering the reality of Palliative Medicine as an area of medical practice in the country, there is an open perspective for the recognition of the area as a medical specialty. Although this is already a reality in several countries in the world, the process is still under discussion in Brazil. In this respect, it is worth clarifying that Palliative Medicine comprises a set of competencies and skills that all physicians must acquire during undergraduate training, as occurs with all other medical specialties. However, the advances in technical and scientific knowledge in the area, the increasing number of publications and scientific events worldwide, and the already existing Brazilian medical-ethics norms 8 establishing that PC must be offered in the context of advanced and terminal diseases highlight Palliative Medicine as a field of its own, demanding a large number of specific technical and attitudinal competencies, which were recently recognized in Brazil 13. The recognition of this situation, and a compilation of the specific competencies required of the physician in this practice, which cannot be covered in only 1 year of residency, are essential for the acknowledgment that Palliative Medicine fulfills the requirements for its establishment as a new medical specialty in Brazil.
Contributing to that are the recent establishment of Ministerial Resolution 41/2018, of October 31, 2018 14, and four State Laws 15-18, which begin to provide the basis for the development of specific Health Policies aimed at the adequate provision of PC in Brazil, especially in the public context. In this sense, it was necessary to re-register all residency programs and to elaborate new pedagogical projects in order to start, in 2023, the already expanded 2-year residencies. CONCLUSION The organization of Palliative Medicine in Brazil is accelerating. However, the inclusion of PC training in the undergraduate education of future health professionals, as well as the development of National Health Policies with universal access to PC, are challenges of vital importance for the upcoming years.
2022-12-07T16:49:40.221Z
2022-12-05T00:00:00.000
{ "year": 2022, "sha1": "b191d6970b9a177572f610fae9b05415ec20f06f", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/ramb/a/5BxzF8rpXrYhZStGyz6XLMx/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eef38fde493282b55c0b921b6bc121d5bfed6ee2", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [] }
12330346
pes2o/s2orc
v3-fos-license
An infrequent case of intussusception caused by gastrointestinal stromal tumor in an adult patient Intussusception may occur anywhere in the gastrointestinal system. Unlike its idiopathic childhood counterpart, it is uncommon during adult life and a definitive cause is usually found; almost half of cases develop with malignancy. Gastrointestinal stromal tumors (GIST) originate from the interstitial Cajal cells of the gastrointestinal tract. They occur more frequently in the stomach and small intestine, and often grow extraluminally, making them unlikely to cause obstruction or bleeding. Presently described is an unusual instance of ileo-ileal intussusception due to GIST. Intussusception is an entity that develops as a telescoping of a proximal intestinal segment into a distal segment and rarely (1%) may cause a mechanical obstruction in any region of the gastrointestinal system. In childhood, an idiopathic etiology is typical, while in almost all adult patients an etiological factor can be demonstrated. In nearly half of adult cases, intussusception develops secondary to a malignancy [1,2]. Symptoms of mechanical obstruction due to intussusception can occur acutely, or it may develop in an intermittent, chronic form secondary to spontaneous resolution of the telescoped bowel segments and re-invagination over time. Gastrointestinal stromal tumors (GIST), originating from Cajal cells, which assume the task of a pacemaker in the interstitial area, may occasionally lead to intussusception [3]. Since GISTs have a tendency to grow into the extraluminal space, they rarely invaginate. Currently described is an infrequently encountered case of intussusception secondary to GIST, a tumor that represents 0.4% of all gastrointestinal tumors. CASE REPORT A 64-year-old male patient presented at the outpatient clinic of general surgery with complaints of abdominal pain, frequent belching, and weight loss for the previous 3 to 4 months. The patient indicated that he had occasional episodes of nausea without vomiting, but no problem with defecation. His personal history did not reveal any known disease, nor was there malignant disease in his family history. On physical examination, the only remarkable abdominal finding was minimal tenderness in the right lower quadrant. Digital rectal examination findings were also unremarkable. Serum biochemical and hematological parameters were within normal limits (white blood cell count: 7700 K/uL; hemoglobin: 12.8 g/dL; alanine aminotransferase: 10 IU/L; aspartate aminotransferase: 13 IU/L). Plain abdominal radiographs were unremarkable. Abdominal computed tomography (CT), obtained to clarify the reason for the tenderness localized in the lower abdominal quadrant, revealed a mass lesion interpreted as plastron appendicitis in the area of the cecum, or as invagination (Figure 1). No abnormal finding was detected during the colonoscopic or gastroscopic examinations performed to detect intraluminal pathology. Laparoscopic exploration revealed invaginated bowel segments 30 cm proximal to the terminal ileum. However, due to the presence of diffuse adhesions, laparotomy was performed. The bowel segment was excised en bloc, and intestinal continuity was achieved with a side-to-side ileo-ileal anastomosis (Figures 2, 3). The postoperative period was uneventful and the patient was discharged on the fifth postoperative day. After completion of the histopathological analysis, the patient was diagnosed with GIST, and he is currently monitored by the oncology clinic (tumor diameter: 8x5x5 cm; mitotic activity: decreased [5/50 BBA]; proto-oncogene c-Kit: >50% ++ staining intensity; the lesion did not stain with CD34, smooth muscle actin, or desmin; S100: less than 10% of the cells stained; Ki-67 score: low [1%]). DISCUSSION More than 90% of GISTs are seen in people aged >40 years. Although it is more frequently seen in men than in women, the incidence does not differ according to geographic region or ethnicity [3]. It may cause bleeding or gastrointestinal symptoms. When it reaches a large size, intussusception can be seen in 20% of cases, presenting with pain, a palpable mass, and obstruction [4]. Intussusception, which develops with the infolding of a proximal bowel segment into a distal segment, is rarely seen in adult patients. Symptoms are typically nonspecific, including recurrent abdominal pain, nausea, and vomiting. In half of these patients, the lead point of the telescopic infolding is malignant [2]. In nearly one-third of patients, a preoperative diagnosis can be made based on history and physical examination combined with imaging modalities [5]. On ultrasonography, the telescoped multilayered intestinal wall may be visualized as a "bull's eye" or a "target sign" (Figure 1). A definitive diagnosis can be made based on intraoperative findings; however, oral or intravenous contrast-enhanced abdominal CT yields the most accurate result [6]. Usually, GISTs grow exophytically toward the peripheral tissues and do not lead to the development of intussusception. However, since they retain the possibility, though rarely, of behaving like a pedunculated or sessile polyp, as was seen in this case, they can induce intussusception [7,8]. Histological differentiation of a GIST from an intestinal mesenchymal tumor can be achieved with CD117 staining or demonstration of the presence of the proto-oncogene c-Kit. A newly developed oral formulation of a selective tyrosine kinase inhibitor with a small molecular structure, imatinib, has enabled the prevention of recurrent GISTs during the postoperative period. Before the introduction of imatinib treatment, surgical excision was the only treatment modality, but the recurrence rate remained high [9]. When adjuvant imatinib was administered to high-risk patients following surgical excision of GIST, a marked prolongation of survival time was observed compared with placebo [10,11]. Based on these results, the use of imatinib as an adjuvant treatment in high-risk patients after surgical resection of GIST was approved by the US Food and Drug Administration (FDA) in 2008 and by the European Medicines Agency (EMA) in 2009. In a recent randomized Phase III study that included 400 high-risk GIST patients, imatinib treatment for 3 years provided a statistically significant improvement over 1-year imatinib treatment (recurrence-free survival: 65.6% vs 47.9%, p<0.001; overall survival: 92% vs 81.7%, p<0.02) [12]. Based on these outcomes, both the FDA and the EMA concurrently advised that adjuvant treatment be maintained for 36 months in high-risk patients. Since spread to regional lymph nodes is rarely seen, segmental resection of the tumor, rather than peritumoral resection, is a sufficiently acceptable surgical intervention. Conclusion Intussusception is very rarely seen in adult patients. Compared to the idiopathic variant observed in
2018-04-03T05:08:45.815Z
2017-08-26T00:00:00.000
{ "year": 2017, "sha1": "620e7bb4ef90bd9ac53474b459fa2705f80767a8", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.14744/nci.2015.53825", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "620e7bb4ef90bd9ac53474b459fa2705f80767a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255291671
pes2o/s2orc
v3-fos-license
A Review of In-Situ Techniques for Probing Active Sites and Mechanisms of Electrocatalytic Oxygen Reduction Reactions Highlights A comprehensive discussion of monitoring the structural evolution of catalysts during ORR via multiple in-situ techniques to identify the active sites is presented. The extensive applications of in-situ techniques in elucidating the oxygen reduction reaction (ORR) mechanisms/pathways are reviewed. The challenges of current in-situ techniques for monitoring the various dynamic evolutions in ORR are pointed out, and recommendations are given. Introduction The exploration of clean and sustainable energy sources, such as wind, solar, and hydroelectric power, and the development of the associated electricity storage and conversion technologies are indispensable for a secure and sustainable future [1]. Among electricity storage and conversion technologies, electrochemical energy technologies, including water electrolysis to produce hydrogen, fuel cells to convert hydrogen to electricity, and metal-O2/air batteries to store/convert energy, have emerged as reliable, safe, efficient, environmentally friendly, and sustainable options [2-4]. For fuel cells, such as proton exchange membrane fuel cells (PEMFCs) and direct borohydride fuel cells (DBFCs), and for metal-O2/air batteries, such as the lithium-O2/air (Li-O2/air), sodium-O2/air (Na-O2/air), and zinc-O2/air (Zn-O2/air) batteries, the oxygen reduction reaction (ORR) at the cathode has been identified as the reaction dominating device performance owing to its slow kinetics [5-7]. Therefore, active and stable electrocatalysts coated on the cathodes are necessary to drive the ORR at a practical rate. Considering that superior performance depends markedly on the active site of the catalyst [45,46], it is imperative to clarify the active species and monitor its evolution. Over the past decades, various conventional ex-situ characterization techniques have been applied to describe the phase, valence, electron transfer, coordination, and spin state of the catalyst before and after the electrocatalytic reaction, in order to identify the active site and infer its variations. However, since the catalyst is prone to irreversible changes when exposed to air during transfer in ex-situ tests, the conclusions drawn may not match reality and may cause misinterpretation [47]. More importantly, the catalyst always undergoes dynamic structural evolution under practical conditions, leading to evolution of the active site and increasing the difficulty of its identification. Thus, the development of in-situ characterization techniques to monitor catalyst structure evolution in real time is essential for determining the active sites; such techniques can also validate the rationality of the catalyst structure design and guide the synthesis of highly active catalysts. In addition, although the rational design of catalysts with highly active sites is crucial for efficient ORR, the reactions that occur during synthesis are often unknown, making the synthesis process complex and driving up costs [48,49]. Thus, visual monitoring of the structural and morphological changes of the catalyst during synthesis, including nucleation, growth, reconstruction, and Ostwald ripening, is equally essential to simplify the synthesis process and shorten the synthesis time, and it cannot be ignored.
The ORR can proceed via multiple pathways involving abundant and complex intermediates/products, which are related to the active site of the catalyst and are vital for revealing the ORR mechanism. However, extensive work has confirmed that these species also undergo a series of conversions and reconstructions under experimental conditions, making them hard to distinguish accurately [50,51], and their transient evolution is difficult to capture because of the fast rate of the catalytic reaction [52-54]. Moreover, the post-process nature of common ex-situ characterization techniques limits recognition of their true changes in real time. Hence, the exploitation of in-situ characterization techniques to accurately determine the evolutionary behavior of the intermediates (adsorption/desorption) or products (such as nucleation, growth, and reconstruction) is very necessary and should be equally emphasized [48]. Specifically, in-situ techniques can reveal dynamic changes during the reaction without requiring the test cell to be dismantled, thus providing more valuable information to clarify the active site in reverse and intuitively elucidate the ORR mechanism [55-57]. Notably, electrolyte anions adsorb competitively with intermediates on the catalyst surface, which can cause catalyst surface poisoning and thus affect the reaction pathway. The chemisorption of electrolyte anions on the catalyst surface is therefore a non-negligible target to be detected during ORR, providing information on the dynamic evolution of the catalyst surface closer to real experimental conditions. Furthermore, to ensure accurate identification of the active site and precise elucidation of the ORR mechanism, a great deal of work has been devoted to combining theoretical calculations to facilitate the assignment of the detected signals [58,59]. Of course, proper in-situ cell design is a guarantee of efficient in-situ detection and has also received a lot of attention [45,60]. Each technique has its own unique detection characteristics and limitations; therefore, combining techniques to provide more adequate and useful information has also been a focus of researchers. This work reviews the main advances in characterizing the catalytic processes of different catalysts in fuel cells and metal-O2/air batteries using various in-situ techniques, with a focus on Pt-based, M-N-C, and some oxide catalysts (Fig. 1). In detail, the discussion starts with the accepted ORR mechanisms in the representative devices, listing the possible intermediates and products involved in the reaction. The working principles of the various in-situ techniques and their unique detection capabilities for ORR are briefly outlined. Direct in-situ characterizations of catalyst structure evolution during ORR are then systematically summarized, partly covering the morphological monitoring of catalysts during the synthesis process. The focus then turns to the dynamic evolution of intermediates and products, together with chemisorption information on electrolyte anions. These two sections reveal the factors affecting the catalytic performance of catalysts from direct and indirect perspectives and thus guide the optimization of their structural design. More importantly, they provide important guidance for clarifying the active center and elucidating the ORR mechanism.
In addition, the integration of theoretical calculations is further emphasized to facilitate the assignment of in-situ signals. The design of in-situ cells and the coupling of several techniques are also covered to ensure accurate information acquisition. Finally, based on the achievements and challenges of present in-situ techniques, some future research directions are proposed for overcoming the remaining challenges toward a better understanding of the ORR mechanism. 2 Brief Overview of the ORR Mechanism and Intermediates Mechanism of ORR in Fuel Cells and Intermediates Among different fuel cells, what differs is the anode reaction; the cathode reaction is the same, namely the ORR. Taking the PEMFC as a typical example, the ORR occurs at the cathode, where O2 gains electrons to produce a series of oxygen-containing species [61,62]. Specifically, the oxygen molecules first diffuse to the catalyst surface to form adsorbed oxygen molecules (O2*, where * denotes the active site) (Fig. 2a); from O2*, the reaction can then proceed through a dissociative pathway, an associative pathway, or a peroxide pathway, and capturing information on the corresponding intermediates is beneficial for elucidating the ORR mechanism. Notably, the peroxide pathway is the two-electron pathway, which only needs to overcome an energy barrier of 146 kJ mol−1 [23,63]. However, the peroxide intermediates are difficult to reduce completely, and some may reversibly decompose to re-form oxygen molecules, resulting in low current efficiency [64,65]. Also, these intermediates can oxidize/corrode the catalytic active center and the carbon support, thus significantly weakening the activity and stability of the catalyst. In contrast, the former two pathways are both four-electron pathways; although they have a higher energy barrier to overcome, they can efficiently convert oxygen molecules to H2O without producing oxidizing/corrosive intermediates. Thus, it is essential to monitor the changes of the intermediates during the reaction and further clarify the catalytic pathway, which can help guide the design of highly active and stable catalysts (i.e., catalysts following a four-electron rather than a two-electron pathway). Moreover, the adsorption/desorption of oxygen-containing intermediates (e.g., O2*, *O, *OH, *OOH, and HOOH*) on the catalyst surface is key to the ORR kinetics [66,67]. The adsorption of O2 (i.e., O2 → O2*), as the initial step, determines the course of the subsequent reaction steps. The adsorption/desorption of OH occurs at high potential, which determines the onset potential at which the reaction begins to proceed. Also, the amount of adsorbed OH determines the number of available active centers, while the ability of adsorbed OH to bind with protons/electrons further limits the reaction rate. More importantly, the adsorption energy of intermediates on a given catalyst surface is closely correlated with the ORR catalytic activity [68-70]. It is often considered a catalytic activity descriptor, which can be obtained by theoretical calculation. If the adsorption strength is too weak, it will limit the proton/electron transfer; conversely, if the adsorption is too strong, desorption becomes difficult. Therefore, it is necessary to combine theoretical calculations to guide the design of catalysts with moderate adsorption energies. Mechanism of ORR in Metal-O2/Air Batteries and Intermediates For the series of metal-O2/air batteries, the ORR is responsible for the discharge process and follows a similar reaction mechanism. Taking a typical Li-O2 battery as an example (Fig.
2b), the dominant reaction equation is as follows: 2Li+ + O2 + 2e− ⇌ Li2O2. The forward direction is the discharge reaction, during which different kinds of intermediates (i.e., lithium oxides/mixtures) and the ideal Li2O2 product can be produced by the reduction of O2 [71-73]. Concretely, in the first step O2 combines with one electron, resulting in the superoxide ion (O2−), which then meets Li+ to form LiO2. LiO2, as the intermediate, is thermodynamically unstable and can rapidly convert to Li2O2. Based on the degree of solvent solubilization of the cations produced with LiO2, the growth process of Li2O2 can be divided into two categories [74-77]. Firstly, in the electrode-surface growth mechanism, in weakly Li+-solvating solutions, LiO2 is adsorbed on the catalyst surface of the electrode, forming LiO2*; then, LiO2* either undergoes disproportionation to form Li2O2* and O2, or it receives a single e− and Li+ from the electrode and electrolyte, respectively, and continues to be reduced to Li2O2*. Succinct Overview of Various In-Situ Characterization Techniques In this section, the operational principles, unique characteristics, and drawbacks of each characterization technique are summed up. An in-depth comprehension of the characteristics of each technique will facilitate its targeted use in dynamic monitoring of the ORR. Some summaries of the related in-situ cells are also included. In-Situ X-Ray Diffraction (XRD) XRD, as a form of elastic scattering, can provide unique diffraction patterns for periodically structured materials, revealing their precise crystallographic information [47,79]. Concretely, the size, dimensions, shape, and orientation of the crystal unit determine the positions of the diffraction peaks, while the type and positions of the atoms determine their intensities [60,80]. In-situ XRD can detect the crystalline phases during ORR in real time, and the time-varying diffraction patterns can exhibit phase-transition information on catalysts and intermediates. For real-time detection, a special in-situ cell needs to be assembled; for example, a tailor-made cell with a radial X-ray penetration window for XRD analysis was designed by Liu et al. [81]. The experimental components were integrated in their cell (Fig. 3a), including MesoCoNC@GF, a Zn plate, a Whatman glass microfiber filter, and KOH + zinc acetate as the air cathode, anode, separator, and electrolyte, respectively. However, if the intermediates are very thin, with very low crystallinity, existing in-situ XRD cannot accomplish the detection; in addition, it is not possible to probe the local sites of the catalyst. In-Situ X-Ray Absorption Spectroscopy (XAS) XAS (also named X-ray absorption fine structure, XAFS), a form of inelastic scattering, includes X-ray absorption near-edge structure (XANES) (covering the region up to ~25 eV above the edge) and extended X-ray absorption fine structure (EXAFS) (covering the region of higher energies) [48,79]. The former (XANES) can reveal oxidation-state and electronic-configuration information. In detail, a target atom in a higher oxidation state, having a higher effective nuclear charge, requires extra energy to excite its core electrons in comparison with other atoms [82]. The latter (EXAFS) supplies information on interatomic spacings and coordination numbers, which are encoded in the constructive or destructive interference caused by photoelectron scattering [83].
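As a textbook-level reminder (not tied to any specific study cited here) of how coordination numbers and interatomic distances enter the measured signal, the standard single-scattering EXAFS equation can be written as

\chi(k) = \sum_{j} \frac{N_j \, S_0^2 \, f_j(k)}{k R_j^2} \, e^{-2k^2\sigma_j^2} \, e^{-2R_j/\lambda(k)} \, \sin\!\left(2kR_j + \delta_j(k)\right),

where, for each coordination shell j, N_j is the coordination number, R_j the interatomic distance, \sigma_j^2 the mean-square bond-length disorder, f_j(k) and \delta_j(k) the backscattering amplitude and phase shift, \lambda(k) the photoelectron mean free path, and S_0^2 the amplitude reduction factor. The oscillatory sine term is precisely the constructive/destructive interference mentioned above, which is why fitting it yields N_j and R_j.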
Thus, in-situ XAS is regarded as a powerful tool for in-situ verification of the electronic-structure changes around the excited atom during ORR. As shown in Fig. 3b [84], an in-situ cell was made of chemically inert Teflon, in which the working electrode (WE) was constructed by spraying the catalyst layer (40 μm thick) onto carbon paper (200 μm). It also had an X-ray penetration window, and the electrolyte thickness was less than 200 μm, both ensuring high-quality data acquisition for in-situ XAS analysis. To reduce the IR drop induced by the resistance of the thin electrolyte layer, the reference electrode (RE) was linked to the cell via a salt bridge. Regrettably, XAS is unable to probe low-atomic-number elements, although it can characterize amorphous materials in-situ and covers a wider range of X-ray photon energies. In-Situ Raman Spectroscopy Owing to the weak Raman signal intensity of the necessarily and randomly adsorbed intermediates in ORR, the development of surface-enhanced Raman spectroscopy (SERS) or surface-enhanced resonance Raman scattering (SERRS) is essential, especially for in-situ Raman analysis [87,88]. In-situ shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS) is the most deeply researched variant, in particular for revealing the ORR mechanism of low-index Pt(hkl) surfaces on Au@SiO2 nanoparticles (NPs) [89]. In detail, the Au nanoparticles act as cores for amplifying signals, while thin, pinhole-free silica acts as a shell for separating the target molecules from the core, thereby eliminating interfering effects (Fig. 3c). Even in aqueous solutions (weak scatterers), the vibrational and rotational energy levels of intermediates on the Pt surface can be acquired in real time. Nevertheless, Raman spectroscopy cannot be applied to certain catalytic systems, such as pure metal catalysts. In-Situ Fourier Transform Infrared (FT-IR) Spectroscopy FT-IR spectroscopy, an absorption technique, can detect changes in dipole moments triggered by the rotations and vibrations of bonds in molecular fragments, radicals, and functional groups [90]. In-situ FT-IR allows real-time tracking of the groups generated on the catalyst surface during ORR, with the frequencies and intensities in the spectrum correlated with the category and quantity of the corresponding species, respectively [91]. Prominently, in-situ attenuated total reflection infrared (ATR-IR) spectroscopy, with better spectral reproducibility, focuses on measuring IR radiation variations upon contact with intermediates in real time, providing further insight into the ORR mechanism [92]. To avoid mass-transport limitations during ORR, Nayak et al. [93] developed a modified cell, with a flow field of electrolyte around the catalyst, for in-situ ATR-IR testing; its working mechanism is like that of a rotating disk electrode (Fig. 3d). Regretfully, in aqueous-phase experiments the IR light is sensitive to H2O molecules, making the in-situ FT-IR signal exceedingly weak. In-Situ Transmission Electron Microscopy (TEM) TEM is another intuitive means of observing nanomaterial morphology [94]. It can also be applied to the identification of crystal structure, yielding information on interplanar spacing, since the diffracted electron beams are related to the atomic structure of the material [45]. Thus, in-situ TEM can monitor the morphological variations of catalysts during ORR in real time, and it can also realize simultaneous dynamic monitoring of chemical reactions (the Bragg relation recalled below links such diffraction data to interplanar spacings).
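Since both the in-situ XRD patterns discussed above and the interplanar-spacing information accessible through TEM diffraction rest on the same diffraction condition, it is worth recalling Bragg's law:

n\lambda = 2 d_{hkl} \sin\theta,

so that peak (or spot) positions map directly onto interplanar spacings d_{hkl}. This is why peak shifts in time-resolved diffraction patterns track lattice strain and phase transitions during ORR.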
Notably, in-situ aberration-corrected scanning/transmission electron microscopy (S/TEM) allows real-time characterization of material structures through the quantum-mechanical interactions induced between the incident electron wave field and the atomic potentials [95]. There are two imaging modes, bright-field imaging and high-angle annular dark-field (HAADF) imaging; the latter, as the main mode, can reveal structural information in-situ at the atomic level. Moreover, this in-situ test is usually equipped with electron energy loss spectroscopy (EELS) and energy dispersive X-ray spectroscopy (EDX) to facilitate the provision of elemental, compositional, and electronic-configuration information on the catalyst. More importantly, it is also possible to introduce optical, electrical, and thermal stimuli into the cell for this in-situ test. For example, Gong et al. [96] added a heat source to the cell to enable in-situ observation of the ORR under variable-temperature conditions (Fig. 4a); in-situ HAADF images and elemental distributions were also amassed from the relevant detectors by rotating the tomography holder through a range of tilt angles (Fig. 4b). However, in-situ TEM places relatively stringent requirements on sample preparation and the operating environment, especially for high-resolution measurements in liquid solution. In-Situ Atomic Force Microscopy (AFM) AFM, based on force, is a typical surface scanning technique belonging to the family of scanning probe microscopy (SPM) [86]. For its main application to the ORR of metal-O2/air batteries, in-situ AFM, especially in contact mode, can trace the morphological changes of the catalyst surface/interface in real time. Namely, it can scan the vertical morphology of the catalyst surface in-situ using a constant repulsive force between the probe and the surface [90]. Notably, it is suitable for a wide range of materials, independent of their electrical conductivity, and can work in liquid solution since no beam or lenses are involved. Even so, to enhance the stable contact between probe and surface, it is crucial to ensure that the in-situ cell provides an undisturbed liquid environment during ORR; that is, it is better to start in-situ AFM detection after the electrolyte has been saturated with O2 [97,98]. The challenges for in-situ AFM are the manner of contact between the cantilever beam and the electrode, and the inertness of the sample on the electrode. In-Situ Scanning Electrochemical Microscopy (SECM) SECM, another SPM technique, is based on current and can offer electrochemical imaging information for a wide range of catalyst surfaces/interfaces, whether insulating or conducting [90]. It does not require surface contact (unlike AFM) and has three operating modes: generation, collection, and feedback, with the last being the most used. Recently, in-situ SECM has been widely applied to elucidate the number of active sites and the catalytic kinetic rates of catalysts for ORR; it is conducted by connecting a bipotentiostat workstation. Typically, an in-situ SECM cell (Fig. 4c) contains aligned tip (for generating the titrant) and substrate (for supporting the catalyst) ultramicroelectrodes (UMEs), with the inter-electrode distance usually controlled to less than 5 μm [99,100]. When the titrant detects the active intermediates formed on the catalyst surface quantitatively and sensitively at given potentials in real time, information on the number of active sites can be obtained.
When a delay time (t_delay) is adopted between the tip and a given potential on the substrate (i.e., time-delay titration), the residual active sites can be quantified in-situ, and kinetic-rate information can further be obtained. In the work of Li et al. [101], Cu(I) was easily oxidized by FcMeOH+ (the tip-generated titrant) rather than by O2 during ORR on a Cu SAC, supporting the in-situ time-delay titration test, and more kinetic information was derived from the log of the integrated charge density (ln[Cu(I)]) plotted against t_delay (Fig. 4d); for a pseudo-first-order consumption of the titrated species, such a semilogarithmic plot is linear, with a slope equal to the rate constant. However, nanoscale resolution for SECM is challenging, being limited by the tip size and its distance from the substrate. In-Situ Electrical Transport Spectroscopy (ETS) ETS is an electrochemical interface analysis technique based on the fact that the chemical sensitivity of metal nanocatalysts can be converted into electrical signals [102]. For in-situ ETS, cyclic voltammetry (CV) and ETS are measured simultaneously in a three-electrode system in real time by a dual-channel source measure unit (SMU). More precisely (Fig. 4e), one channel supplies the gate voltage (V_G) across the working electrode (WE) and the reference electrode (RE), while the Faradaic current (I_G, in line with the CV electrochemical current) is collected through the counter electrode (CE). The other channel applies a small bias voltage (V_SD) across the source/drain electrodes (i.e., the WE), and the electrical conduction current (I_SD), namely the ETS signal, is collected simultaneously. For ultrafine metallic nanocatalysts [103], the solution anions tend to adsorb on the inner Helmholtz plane (IHP, at the solid-liquid interface), which is difficult for radiation to access, making in-situ ETS essential for monitoring their surface adsorption behavior. Taking a platinum nanowire (PtNW) as an example, in-situ ETS provided a signal that was extremely sensitive to the dynamic evolution of surface states during ORR, thus revealing the effect of surface anion adsorption on the ORR. In the cell (Fig. 4f), the PtNW was placed between gold-protected source and drain electrodes; it responded only to the strong scattering effect of adsorbed anions, ensuring easy detection through the current. Nevertheless, for catalysts with only weak chemical sensitivity, it is difficult to use in-situ ETS to track the related scattering-effect signals on their surfaces. In-Situ Mössbauer Spectroscopy Mössbauer spectroscopy, based on the Mössbauer effect, is a technique for detecting the nuclear state of certain specific elements (e.g., Fe, Sn, Ru, and Au). In-situ Mössbauer spectroscopy can track the nuclear states of these elements in real time, including the chemical state, coordination symmetry, spin state, and magnetic moment [104,105]. Specifically, the isomer shift (IS) parameter, correlated with the electronic structure, can ascertain information about chemical shifts; the quadrupole splitting (QS) parameter can reveal the electronic symmetry, further reflecting the spin state; and the magnetic Zeeman splitting (B) parameter is invariably used to furnish magnetic-structure information [105,106]. Recently, in-situ 57Fe Mössbauer spectroscopy has been widely used to reveal the dynamic evolution of the spin state and chemical shift of the Fe active site in Fe-N-C catalysts during ORR [107,108]. Notably, to avoid excessive attenuation of the γ-rays, the width of the electrode chamber of the in-situ cell should be limited to a small size. For example, the compartment was restricted to a plastic cavity by Li et al., with a width of less than 5 mm (Fig.
4g) [109]. Unfortunately, only a limited number of nuclei exhibit the Mössbauer effect, and the experimental conditions are harsh. In summary, in-situ characterizations can ensure real-time detection of the catalytic process as far as possible, thereby deepening our understanding of the ORR mechanism. They are also very helpful for clarifying the relationship between the structure of the catalyst and its ORR activity, further guiding the rational design of catalysts. Notably, each in-situ characterization technique exhibits its own unique characteristics, as summarized in Table 1, and the techniques share complementary advantages. An appropriate in-situ technique should be selected based on how its characteristics match the requirements of the target catalyst. In the subsequent sections, we outline the various applications of in-situ techniques for monitoring the dynamic evolution of the ORR process from multiple perspectives. In-Situ Monitoring the Dynamic Evolution of Catalysts during ORR In this section, we summarize the applications of multiple in-situ characterization techniques for direct monitoring of catalyst dynamic evolution during ORR (Fig. 5, left part). Such in-situ revelations can help to elucidate the active site and simultaneously guide the synthesis of the catalyst; that is, they can support the rational design of the catalyst from the source, which in turn promotes efficient ORR. Notably, in each of the following sections all potential values are given relative to the reversible hydrogen electrode (RHE), which will not be repeated later. Monitoring of the Phase Transitions Dynamic detection of the crystal-phase changes of the catalyst during ORR helps to ascertain the active site with an efficient catalytic effect in the reaction. It is also useful for identifying the optimal catalyst structure with higher activity, and it provides a basis for further adjusting the structural design. Notably, ADT is the abbreviation of accelerated durability testing in this part and will not be repeated; the related works are as follows. Phase transitions also often occur in catalyst surface coverings, most notably in the case of rare-earth Pt alloy catalysts (i.e., PtxY and PtxGd). Escudero-Escribano et al. [110] investigated the phase transitions of the active Pt overlayer on a Gd/Pt(111) single-crystal electrode during ORR in PEMFC using in-situ synchrotron grazing-incidence XRD (GI-XRD). For Gd/Pt(111) R0 (i.e., the alloy phase on the crystal with no rotation), in-situ GI-XRD revealed that the compressive strain in the Pt overlayer relaxed slightly as the cycling gradually entered the potential region relevant to ORR (i.e., 0.6-1.0 V). After 3000 cycles, the structure of the Pt overlayer tended to stabilize, and it persisted at about 0.8% compressive strain (8.3 Å) after 8000 cycles. This confirmed that the active phase was a compressed Pt overlayer formed on the Gd/Pt(111) electrode, with a relatively stable structure during long ORR cycling. For Gd/Pt(111) R30 (with a 30° rotation), no structural changes in the Pt overlayer were observed by in-situ GI-XRD as the cycling went from open-circuit potential up to 1.0 V. After the cycling potential increased above 1.2 V, the strain in the Pt overlayer relaxed strongly, and up to 1.3 V the thickness of its crystalline part increased. This could be attributed to the oxidation of Pt and the leaching of Gd, leading to the Pt overlayer becoming rough and loosely thickened.
This indicated that the phase transitions of the Pt overlayer were related to the cycling potential, and that higher potentials led to its degradation. Thus, this work indicates that a Pt overlayer with compressive strain can act as the main active site for ORR and also determines the excellent ORR stability, further providing a basis for the construction of catalytic layers on the catalyst surface. Interestingly, catalysts can not only catalyze phase transitions in ORR, but also participate in phase transitions themselves during the reaction. As an example, Gao et al. [111] discovered phase transitions of a LiCoO2 (LCO) catalyst by in-situ XRD/Raman and revealed its self-reinforcing effect as a catalyst for the Li-O2 battery (LOB). In detail, highly ordered LCO as the ORR catalyst exhibited splendid activity during discharge (i.e., ORR), with no phase changes, accompanied by the accumulation of Li2O2 products. During charging, the LCO underwent a phase change through the extraction of Li+ and transformed into a LixCoO2 phase (i.e., Li0.6CoO2 (L0.6CO)). As shown in Fig. 6a, the Eg bending (O-Co-O) and A1g stretching (Co-O) vibrations of LCO gradually became weaker and changed into the A1g stretching (Co-O) vibration of Li1−xCoO2. The formation of L0.6CO then induced Li/oxygen vacancies and Co4+, which disrupted the symmetry of the CoO6 octahedron, further boosting the oxygen evolution reaction (OER). Synchronously, L0.6CO reverted to the original phase (LCO), which continued to promote the next ORR. In summary, the in-situ analyses in this work revealed that LiCoO2 could achieve self-adjustment between LCO and L0.6CO, thus delivering higher ORR and OER performance. The phase changes of the LCO catalyst induced by the intercalation or extraction of Li+ therefore do occur in ORR, and they can modulate the torsional deformation and recovery of the CoO6 octahedron (i.e., the active sites), further tuning the catalytic activity to its best. In-depth research on such self-reinforcing catalysts can bring new ideas for the design of LOB catalysts. Notably, the monitoring of phase transitions is an essential descriptor for guiding the rational design of the ordering degree of bimetallic nano-catalysts (BN). For example, Gong et al. [112] used in-situ XAS to probe the changes in the local and electronic structure of the fcc-phase PtFe ordered alloy (O-PtFe) catalyst during a 10 k-cycle ADT for ORR in PEMFC. The Pt L3-edge XANES showed that O-PtFe was mainly in the metallic state (Pt0), with a slight negative shift, proving that Pt was reduced to form an alloy with Fe. This was consistent with the EXAFS finding that its Pt-Pt bond length (2.708 Å) was shorter than that of pure Pt (2.775 Å). The smaller coordination number (CN) of Fe (ca. 2.6) than of Pt (ca. 7) further offered evidence that the O-PtFe surface was Pt-rich. These results testified that an ultrathin Pt shell phase formed on the surface of O-PtFe, acting as the main active site responsible for the splendid ORR activity (0.68 A mg−1 Pt), higher than that of the disordered PtFe alloy (D-PtFe). In-situ Pt L3-edge XANES and EXAFS revealed that the ordered structure of O-PtFe with the ultrathin Pt shell was largely preserved, both in peak position and in intensity, while the Fe K-edge XANES spectra showed a slight decrease in the surface Fe leaching rate, proving that the ultrathin Pt shell ensured the chemical stability of O-PtFe against Fe leaching.
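As a quick, back-of-the-envelope check of the compressive-strain picture (assuming the average EXAFS Pt-Pt distance can be compared directly with that of bulk Pt, which is only approximately true for a thin shell), the strain implied by the bond lengths quoted above is

\varepsilon = \frac{d_{\mathrm{Pt-Pt}}(\mathrm{O\text{-}PtFe}) - d_{\mathrm{Pt-Pt}}(\mathrm{Pt})}{d_{\mathrm{Pt-Pt}}(\mathrm{Pt})} = \frac{2.708 - 2.775}{2.775} \approx -2.4\%,

i.e., a compression of roughly 2.4% relative to pure Pt, consistent with the compressed-Pt-shell interpretation given above.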
Thus, the monitoring of phase transitions indicates that O-PtFe can exhibit better ORR activity and stability than D-PtFe due to the formed ultrathin and stable Pt shell phase. Namely, improving the degree of ordering is a good method for modifying BN toward higher ORR performance. The role of this descriptor (i.e., the monitoring of phase transitions) for BN is further demonstrated in N-doped bimetallic nano-catalysts (N-doped BN). Zhao et al. [113] explored the local and electronic structure changes of the N-doped rhombohedral ordered PtCu catalyst loaded on Ketjenblack (KB) (i.e., Int-PtCuN/KB) during ORR in PEMFC by in-situ XAS. In detail, some weak oxidation of Cu in Int-PtCuN/KB was evidenced by in-situ Cu K-edge XANES (Fig. 6b), with the white line exhibiting a slight positive shift as the potential increased, but the changes were much smaller than for Int-PtCu/KB (without N) and D-PtCuN/KB (disordered). The results testified that the more ordered structure and the N dopants in the PtCu BN were favorable for increasing the intrinsic corrosion resistance and for promoting the formation of Cu-N bonds to further improve the corrosion resistance, respectively. In-situ Pt L3-edge XANES of Int-PtCuN/KB (Fig. 6c) reflected a slightly enhanced main peak (white line) with increasing potential, induced by some Pt oxide, in line with the EXAFS analysis (i.e., a weak Pt oxide phase peak at 1.6 Å). Compared with Int-PtCu/KB and D-PtCuN/KB, the small changes in the Pt oxide phase revealed that Pt oxidation on its surface was inhibited and that, conversely, more effective Pt active sites were exposed. This confirmed that a Pt monolayer shell was formed on the Int-PtCuN/KB surface, also effectively alleviating the corrosion of Cu. Further, the Pt-Pt bond distance was 2.693 Å in Int-PtCuN/KB, shorter than the 2.701 Å of Int-PtCu/KB. This means that the N dopants introduced compressive strain on the Pt surface of the BN, the main active site, modulating intermediate adsorption and thus improving ORR activity. This work offers valuable insight for the design of BN: the synergistic effect of an ordered structure with N-doping on the ORR should be ensured. These conclusions show that the correlation between the catalyst nanostructure and its ORR performance can be revealed by in-situ monitoring of the phase transitions of the surface coverings and of the catalyst itself. In detail, catalysts with a thin Pt overlayer [110], a higher degree of ordering [112,113], dopants [113], etc., are beneficial to ORR performance. Also, the direct changes in electronic structure and lattice parameters show more intuitively that defects and strain are favorable for enhancing catalyst activity. Monitoring of the Valence Variations For some special catalysts with complex valences, mainly transition-metal-based catalysts, the determination of the effective active center depends on in-situ dynamic monitoring of the chemical valence states during ORR. Many studies therefore focus on this, and some related works are summarized in this part. For a typical single transition-metal-based catalyst, valence changes often occur during the catalytic process. For example, Qin et al. [114] explored the effect of the valence of Co in a polypyrrole (PPy)-modified carbon-supported metallic Co catalyst (Co-PPy-BP) toward ORR in DBFC by in-situ XAFS and XRD. Firstly, the in-situ XANES of Co-PPy-BP evolved into that of standard Co(OH)2, and then into that of CoOOH, which implied that the valence of Co underwent Co0 → Co2+ → Co3+ during ORR.
Namely, both Co0/Co2+ and Co2+/Co3+ redox couples existed in Co-PPy-BP and served as the active centers for the higher ORR activity. Moreover, in-situ XRD showed that two Co crystal peaks (15.7° and 20.1°) were observed at first and gradually disappeared during ORR; finally, two new peaks attributed to CoOOH emerged at 15.2° and 21.61° at the maximum current. This further confirmed that Co(OH)2 was largely an intermediate phase, and that a lower initial Co valence could ensure a richer redox transition, thus serving as the main active center favoring ORR activity. Namely, this work provides the guidance that, for a single transition-metal-based catalyst, a lower initial valence is pivotal for ensuring rich redox transitions during ORR. For dual transition-metal-based catalysts, especially transition metal oxides, in-situ monitoring of valence changes is also key for identifying the effective active center and further revealing the synergistic catalytic effect of the two transition metals on the ORR. For example, Yang et al. [84] utilized in-situ XANES to pursue the valence changes of Co and Mn in the synthesized Co1.5Mn1.5O4/C bimetallic catalyst during ORR in PEMFC, and explored the synergistic interaction of Co and Mn. In-situ XANES at the Mn K-edge indicated that the Mn valence stayed at lower values during ORR at more negative potentials. Also, linear combination fitting (LCF) of the Mn valence revealed that its average valence decreased from 3.15 to 2.91 as the potential decreased (1.15-0.4 V). The fact that Mn(III, IV) largely converted to Mn(II, III) further indicated that various Mn species could serve as active centers for ORR. In-situ XANES and LCF at the Co K-edge illustrated that the average valence of Co also decreased (2.75 → 2.57); namely, large numbers of Co(III) converted to Co(II) as the potential decreased. This change was synchronized with Mn, and thus Co and Mn were considered co-active centers for ORR. Under non-steady-state conditions, the valence states of Mn and Co in Co1.5Mn1.5O4/C changed periodically with the cyclic potential and in synchrony with each other. As shown in Fig. 6d, the relative X-ray intensities (ln(I1/I2), with I1 the incident and I2 the transmitted intensity) of Co and Mn changed from a minimum (higher-valence Co and Mn, at 1.25 V) to a maximum (lower-valence Co and Mn, at 0.42 V) as the potential declined from 1.4 V (upper limit) to 0.3 V (lower limit). The actual boundary potentials of 0.42/1.25 V corresponded to the oxidation/reduction currents in the cyclic voltammetry (CV) (Fig. 6d, inset). That is, Mn(III, IV) → Mn(II, III) and Co(III) → Co(II) occurred at the same time, which further indicated that Co and Mn had a synergistic catalytic mechanism for ORR. Notably, for a single transition-metal-based catalyst (e.g., the work above), a lower initial valence of the transition metal can ensure excellent ORR activity; for a dual transition metal oxide, two high-valence transition metals can act as bridges for electron transfer when O2 is reduced to form H2O, which is more favorable for ensuring a synergistic effect toward efficient ORR. A synergistic effect between two transition metals also exists in transition-metal-based SACs, and this effect can likewise be reflected in the valence changes of the contained metals. Recently, Tong et al. [115] monitored the changes in the electronic structures of Zn and Cu in a Cu/Zn bimetallic single-atom catalyst complexed with nitrogen-doped carbon (Cu/Zn-NC) during ORR in PEMFC through in-situ XAS. The in-situ Zn K-edge XANES spectrum (Fig.
6e) showed that the main peak underwent a slight positive shift, owing to electron transfer (Zn to Cu) and intermediate adsorption. Notably, the valence of Zn in Cu/Zn-NC being higher than +2 was further evidence of its electron transfer to Cu. However, neither the length nor the ligand number of the Zn-N bond changed, indicating that Zn was not the active center. In contrast, the in-situ Cu K-edge XANES spectrum (Fig. 6f) showed that the main peak underwent a distinct positive shift as the potential decreased, then its energy decreased obviously at a potential of 0.4-0.3 V, and finally it returned to the initial state with only a slight positive shift. The in-situ Cu K-edge EXAFS spectrum (Fig. 6g) further revealed that the Cu-N bond (~1.5 Å) gradually weakened as the potential decreased, while a new Cu-Cu bond (~2.2 Å) was generated at a potential of 0.4-0.3 V and disappeared after the reaction. This indicated that Cu single atoms (derived from Cu-N4) aggregated into Cu clusters driven by the external electric field, and finally returned to the initial state (i.e., isolated and dispersed). In conclusion, the in-situ results showed that Cu-N4 was the main active center for efficient ORR; during the catalytic process, it changed from atomic dispersion (with Cu-N bonds) to clusters (with Cu-Cu bonds), with the cooperation of Zn-N4, and finally returned to the single-atom state (with the Cu-Cu bond disappearing). Notably, the cooperative effect of Zn-N4 refers to the electron transfer from Zn to Cu regulating the Cu2+/Zn2+ ratio in Cu/Zn-NC, which further drove the high activity. Thus, this work opens a new insight into the understanding of synergy mechanisms, which can guide the rational design of dual transition-metal-based SACs: the synthesized catalyst should contain special transition metal pairs that ensure electron transfer from one transition metal to the other. For transition metal single-atom site (SAS) catalysts, it is also necessary to monitor the valence changes during ORR and thus clarify the composition of the active center. For example, Han et al. [116] utilized in-situ XAS to pursue the valence changes of an Mn-based SAS catalyst with Mn-N4 structure (Mn-SAS/CN) during ORR in a Zn-Air battery. In the in-situ XANES spectrum of Fig. 6h, both the absorption edge and the main peak showed a positive shift at open circuit compared with the ex-situ condition. Also, both shifted to lower energy as the potential decreased, corresponding to an increase in ORR overpotential. Notably, both could shift back as the potential was reversed (Fig. 6i). These results indicated that more Mn sites were reduced to lower valence states as the overpotential increased; correspondingly, surface OH adsorption decreased while OH− desorption accelerated. Namely, low-valence MnL+-N4 was identified as the active center, which readily promoted electron transfer to *OH and facilitated *OH desorption (corresponding to the transformation OHads-MnH+-N4 / MnL+-N4), further exhibiting efficient ORR. Thus, this work provides evidence that the ORR performance of SAS catalysts is correlated with the valence state of the active center, and it offers an idea for the design of highly active SAS catalysts through a coordination-modulation strategy. This part reveals that the in-situ XAS technique is crucial for disclosing the valence information of the catalyst's active site during ORR; a minimal numerical illustration of the linear-combination-fitting step used in such valence analyses follows.
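The following is a minimal, self-contained Python sketch of the linear-combination-fitting (LCF) idea used, for example, to extract the Mn average valence quoted above (3.15 → 2.91). All array names, reference energies, and weights here are hypothetical placeholders for illustration, not data from the cited works, and a production analysis would use properly constrained least squares.

import numpy as np

def lcf_average_valence(mu_sample, refs, valences):
    """Estimate an average oxidation state by linear combination fitting.

    mu_sample : (n_energy,) normalized absorption of the in-situ XANES spectrum.
    refs      : (n_refs, n_energy) normalized reference spectra of known valence.
    valences  : (n_refs,) oxidation state assigned to each reference.

    Solves an ordinary least-squares fit for the mixing weights, then crudely
    projects onto the physical constraints (weights >= 0, summing to 1); this
    assumes the fitted weights are not all zero.
    """
    w, *_ = np.linalg.lstsq(refs.T, mu_sample, rcond=None)
    w = np.clip(w, 0.0, None)       # enforce non-negativity (crude projection)
    w = w / w.sum()                 # enforce sum-to-one normalization
    return w, float(np.dot(w, valences))

# Toy usage with synthetic spectra (purely illustrative):
energy = np.linspace(6530.0, 6580.0, 400)   # eV, around the Mn K-edge

def edge(e0):
    # Crude sigmoidal edge-step model standing in for a normalized XANES spectrum.
    return 1.0 / (1.0 + np.exp(-(energy - e0) / 1.5))

refs = np.vstack([edge(6544.0), edge(6548.0), edge(6552.0)])   # "Mn(II)", "Mn(III)", "Mn(IV)"
sample = 0.3 * refs[0] + 0.6 * refs[1] + 0.1 * refs[2]         # pretend measurement
weights, avg_val = lcf_average_valence(sample, refs, np.array([2.0, 3.0, 4.0]))
print(weights, avg_val)   # -> roughly [0.3, 0.6, 0.1] and ~2.8

The crude clip-and-renormalize projection keeps the sketch dependency-free; swapping in scipy.optimize.nnls would enforce the non-negativity constraint exactly during the fit.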
This part shows that the in-situ XAS technique is crucial for disclosing the valence information of the active sites of catalysts during ORR. In detail, this technique can effectively monitor the dynamic changes of the electronic structure and the local atomic coordination of the transition metal-based active center under working conditions. The obtained information reveals that different catalysts have different optimal valence states contributing to higher catalytic activity. As summarized above, a low valence state contributes more in M-N-C-type catalysts [114, 116], while a high valence is better suited to transition metal oxides [84]. Moreover, a synergistic effect, mediated by electron transfer between the metals, often exists in dual transition metal systems [115]. The design of the catalyst structure should therefore be tailored to these properties.

Monitoring of the Morphological Evolutions

Not only the phase and valence state affect the catalytic activity of a catalyst; the morphology is also an important factor. Moreover, structural and valence variations are frequently accompanied by morphological changes, so the development of various in-situ imaging techniques is crucial. Several types of studies have investigated the microstructural transitions of catalysts at different scales and with high temporal resolution during the synthesis process, as reviewed below.

Many in-situ morphological monitoring efforts have concentrated on the synthesis process of ORR catalysts. This is crucial for revealing the synthesis mechanism, which can offer guidance for optimizing the synthesis strategy and, in turn, for synthesizing highly active catalysts. As an example, Ma et al. [117] utilized in-situ TEM to view the oriented-attachment process of Pt nanoparticles (NPs) on the (100) lattice planes during the synthesis of one-dimensional (1D) Pt-based nanowire (NW) catalysts by a hydrogen-assisted solid-phase method. In detail (Fig. 7a), intermediate products were collected for in-situ monitoring at different reaction times during the synthesis. At 20 s, three Pt NPs lay in different orientations, with the Pt (111) fringes as reference. At 80 s, the upper Pt NP rotated to turn its (100) lattice plane toward the middle Pt NP; the two then approached at 160 s, attached at 240 s, and coalesced at 320 s. The same process occurred for the bottom Pt NP. Notably, the growth kinetics of the NPs during synthesis, including rotation, approach, attachment, and coalescence, were related to modifications of the Pt surface. Concretely (Fig. 7b), H2 preferentially adsorbed on the (100) lattice planes of the upper and bottom Pt NPs, forming activated Pt-H complexes. These Pt-H species lowered the diffusion barrier, accelerating the local diffusion rate of the modified (100) lattice plane, so attachment and coalescence tended to occur on this specific plane. In conclusion, the synthesis mechanism of the 1D Pt-based NWs was the oriented attachment of solid-state Pt NPs aided by metal surface diffusion, which was induced by the adsorption modification of hydrogen molecules. This work thus provides scalable insight for the preparation of 1D Pt-based NWs with excellent activity for catalytic ORR in PEMFCs.
For Pt-based intermetallic catalysts, in-situ visualization of the morphological changes during the ordering process is essential for revealing the ordering mechanism and, in turn, for reconstructing the catalyst structure. Gong et al. [96] used three-dimensional (3D) tomography-assisted in-situ STEM to monitor the morphological changes of a Pt-Cu nanoframe (NF) catalyst during the ordering process under in-situ heating. Concretely (Fig. 7c), at 300 °C the single Pt-Cu NF showed no deformation, owing to the absence of interatomic migration and rearrangement, although slight Pt segregation formed on the surface, induced by the high surface energy of nanomaterials. On heating to 500 °C, the NF shrank slightly; in particular, its concave part shrank and clustered rapidly, gradually forming an octopod structure. At 600 °C, the NF exhibited the most pronounced octopod structure, with Pt and Cu uniformly dispersed on its surface and a Pt skin formed. Finally, at 700 °C, the NF evolved entirely into a solid nanoparticle (NP) with a Pt-Cu@Pt (i.e., Pt-Cu wrapped by Pt) core-shell structure. Notably, at 660 °C the transient-state NP suddenly changed into a molten state, in which the atoms migrated rapidly and rearranged into the ordered phase. This ordering improved the resistance of the NF to Cu leaching and boosted the stability of the Pt-Cu NF catalyst for ORR in PEMFCs. In conclusion, high-temperature treatment reduces the atomic migration barrier and accelerates atomic diffusion, thereby promoting the ordering of the NF; however, excessively high temperatures induce too much atomic migration and lead to the collapse and aggregation of the NF structure. A reasonable temperature is therefore conducive to preparing a highly active Pt-based intermetallic NF catalyst, and it remains necessary to develop more practical methods for controlling the atomic diffusion rate, so as to realize the controllable synthesis of ordered Pt-based intermetallic catalysts.

For Pt-M NP catalysts, whose catalytic activity is determined by the chemical structure and composition of the outermost atomic layer, it is crucial to monitor in situ the dynamic morphological changes of this layer during synthesis in order to devise more efficient ORR catalysts. Gocyla et al. [118] adopted in-situ TEM to observe the morphological evolution of the outermost atomic layer of an octahedral PtNi1.5 nanoparticle (PtNi1.5 NP) catalyst during an in-situ heating synthesis. Specifically, at the initial 50 °C, the PtNi1.5 NP had a concave octahedral structure with Pt-rich edges and Ni-rich facets (Fig. 7d). On heating to 200 °C, the octahedra became partially filled with Pt {111} facets as Pt diffused from the edges to the {111} facets. Next, a truncated octahedron gradually formed, with flat Pt {111} facets and additional Pt {100} facets, as Pt continued to diffuse. Up to 300 °C, a cuboctahedron with a thin Pt shell gradually formed; the continuing Pt diffusion promoted further truncation and increased the Pt {100} surface area. The cuboctahedra then gradually lost their flat Pt {111} facets, showing rounding of some facets at 400 °C and eventually forming spherical NPs at 500 °C. In summary, the morphological evolution of the PtNi1.5 NP followed the sequence: concave octahedra; octahedra partially faceted with Pt {111} (further evolving to truncated octahedra); cuboctahedra; cuboctahedra with partially rounded facets; and spherical NPs. Notably, the flat Pt {111} facet could act as the main component of a thin Pt shell, further boosting the catalytic activity.
Thus, this work proves that optimizing the ratio of Pt {111} to Pt {100} facets and adjusting the Pt shell to an appropriate thickness offer synthetic routes to highly stable and active Pt-M NP catalysts for ORR in PEMFCs. In similar research, Dai et al. [119] also used in-situ TEM to examine the surface composition of disordered Pt3Co NPs and the dynamic morphological changes of their surface during in-situ heating. Specifically, Co segregation occurred on the Pt {111} rather than the Pt {100} surface. Some of the Co was then oxidized to form a CoO layer on the outermost Pt {111} surface, blocking the exposure of the underlying Pt to O2 and preventing its diffusion and reconstruction. In contrast, the Pt {100} surface possessed oxidation resistance and played a pivotal role in preserving the ORR activity of the disordered Pt3Co NP catalyst. In brief, the Pt {100} surface was the main active component of the disordered Pt3Co NP. Compared with the PtNi1.5 NP work above, it can be concluded that the main active component of a Pt-M NP catalyst depends on the element (Ni, Co, etc.) alloyed with Pt. Real-time monitoring of the dynamic evolution of surface composition and morphology can thus provide precise information about the active site during in-situ heating synthesis, shedding new light (a Pt-rich surface) on the design of Pt-M NP catalysts.

For non-platinum metal-based catalysts, such as the highly active ZIF-derived catalysts used in fuel cells, the actual pyrolysis process is also commonly simulated in an in-situ TEM heating test to reveal the relationship between morphological changes and catalytic activity. Wang et al. [120] applied ZIF-67 as a matrix to investigate, by in-situ TEM, the effect of temperature on the microstructural evolution of the derived catalyst at every stage of pyrolysis. In detail, ZIF-67 retained its initial structure at 300 °C, while its ordered structure collapsed and became disordered at 442 °C. Its unit cell (the CoN4 tetrahedron) then gradually decomposed at 500 °C, resulting in N loss and Co precipitation. Next, a new Co@N-C structure gradually formed at 550 °C, along with a hierarchical, 3D-connected porous structure in the carbon matrix, because the precipitated Co nanoparticles (NPs) grew and catalyzed carbon graphitization. Finally, the matrix carbonized at 800 °C, owing to the further growth of the Co NPs into large particles and further loss of N. Notably, the ZIF-derived Co@N-C catalyst exhibited excellent ORR activity in PEMFCs. The results proved that a suitable N content, degree of graphitization, and hierarchical, 3D-connected porous structure are necessary for an efficient ZIF-derived catalyst. Moreover, the synergistic effect between Co single atoms and Co encapsulated by few-layer graphene was also a key factor in the enhanced ORR activity. Thus, this work provides strong evidence that ZIF-derived catalysts prepared at relatively low temperatures perform better, with the added advantage of energy savings.

In summary, in-situ TEM can directly observe, in real time, the morphological changes of a catalyst itself during synthesis, providing crucial information for designing more efficient ORR catalysts. Specifically, special routes such as the hydrogen-assisted solid-phase method are essential for synthesizing catalysts with special morphologies.
A rational pyrolysis temperature has proven important for Pt-M NP catalysts, as it can adjust the crystal-facet ratio or enrich the Pt skin for a highly active NP; it is equally essential for regulating the N content, pore structure, and graphitization of M-N-C catalysts.

Monitoring of Electronic Structure

Many works have confirmed that maximizing the exposure of active sites and increasing the intrinsic activity of each site are the key tactics for elevating the ORR activity of a catalyst. Although massive progress has been made in catalyst design through these two strategies, the changes in the electronic structure of the active site remain controversial. In this part, the progress of in-situ techniques applied to identify these changes is reviewed.

Typically, the activity of a bimetallic catalyst exceeds that of either single metal, but the real active site responsible for the enhanced performance often remains unclear. Since the real active site is usually related to electron transfer between the two metals, it is essential to monitor the changes in the electronic structure of the catalyst. For example, to ascertain the active site of a CuAg catalyst, Gibbons et al. [121] monitored its electron-transfer behavior via in-situ XAS during ORR. The in-situ Ag L3-edge XANES of CuAg (compared with Ag; Fig. 8a) showed that the main peak (~3353 eV), arising from the excitation of an Ag 2p3/2 electron to its first unoccupied level, was unaffected by Cu at 0.75 V, with almost no electronic change. The other peaks (~3369, 3377, and 3397 eV) varied in intensity and position with and without Cu, but these variations were caused by Cu scattering rather than by the Ag electronic state, indirectly proving that the electronic state of Ag in CuAg did not change. The corresponding in-situ EXAFS of CuAg (Fig. 8b) showed that at 0.75 V the first-shell peak (~2.9 Å) carried no tensile strain and no Ag-Cu interaction peak appeared, with only a decrease of the extended peak (~5 Å) due to Cu-induced lattice deformation. This meant there was no obvious change in the geometric state of Ag in CuAg. Conversely, the in-situ Cu K-edge XANES of CuAg (compared with Cu; Fig. 8c) showed that the edge peak shifted by a few eV at 0.75 V. XANES with Cu standards as reference (Fig. 8d) showed that the Cu signature in CuAg resembled a mixture of metallic Cu0 and Cu+ oxide, while that in pure Cu resembled standard Cu2+ oxide; hence Cu in CuAg existed in a more reduced state. In summary, the electronic state and local bonding of Ag in CuAg remained unchanged while the electronic state of Cu changed dramatically, so the high activity of the CuAg catalyst stemmed from an electronic rather than a geometric effect. That is, creating Cu-centered active sites was more useful than adding Ag-centered ones for a highly active CuAg bimetallic ORR catalyst. This gives an idea of how to identify the real active site of a bimetallic catalyst.

When it comes to quantifying the number of active sites in a bimetallic catalyst, in-situ SECM often plays an important role. For example, Li et al. [122] adopted in-situ surface-interrogation SECM (Fig. 8f, inset) to quantify the active sites of a CoFe-PPy catalyst (Co and Fe highly dispersed on polypyrrole (PPy)-derived carbon) and to reveal the reason for its high activity.
The in-situ SI-SECM titration curves (Fig. 8e) of all the catalysts showed obvious growth in feedback current as the substrate potential became more negative, indicating that active intermediates gradually formed and accumulated. Plots of site density against applied potential (Fig. 8f) then revealed that on CoFe-PPy the active intermediates formed at the lowest overpotential, and that at 0.8 V its site density (40.68) exceeded those of Co-PPy (28.2) and Fe-PPy (21.57). Moreover, the catalytic kinetics quantified by in-situ SI-SECM showed that CoFe-PPy bound O2 much faster than Co-PPy or Fe-PPy. Consequently, the bimetallic CoFe-PPy was verified to have the largest number of active sites, each with higher intrinsic activity, which explains its excellent performance. This work thus gives high-resolution proof that both the active-site density and the kinetic rate of a bimetallic catalyst can be quantified.
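The site densities quoted above come from the charge consumed in titrating the adsorbed intermediates. The conversion itself is simple: integrate the excess tip feedback current over the titration transient and divide by nF. A schematic sketch follows; the transient, electron count, electrode area, and output units are illustrative assumptions, not the analysis pipeline of [122] (whose site-density units are not specified in the text above).

```python
import numpy as np

F = 96485.0  # C per mol of electrons

def site_density(t_s, i_titration_A, i_background_A, n_e, area_cm2):
    """Surface-interrogation estimate of active-site density.

    Integrates the excess tip feedback current (titration minus the
    background transient) to get the charge consumed in titrating
    adsorbed intermediates, then converts charge -> surface sites.
    Returns an areal density in nmol cm^-2.
    """
    excess = i_titration_A - i_background_A          # A
    charge = np.trapz(excess, t_s)                   # C
    mol = charge / (n_e * F)                         # mol of sites titrated
    return mol / area_cm2 * 1e9                      # nmol cm^-2

# Toy transient: exponential decay of the excess feedback current
t = np.linspace(0, 10, 500)                          # s
i_tit = 2e-6 * np.exp(-t / 2) + 5e-8                 # A
i_bg = np.full_like(t, 5e-8)                         # A
print(site_density(t, i_tit, i_bg, n_e=1, area_cm2=0.0707))  # ~0.58
```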
For Men+/Me(n+1)+ catalysts, the catalytic ORR process is usually accompanied by electron-transfer behavior, as mentioned in Sect. 4.2. There are, however, exceptions, and for these it is all the more important to identify the active sites. For example, in the work of Wang et al. [123], a polypyrrole-based carbon-loaded cobalt oxyhydroxide (CoOOH-PPy-BP) catalyst did not undergo any electron transfer while catalyzing ORR in a DBFC. Since the active site was evidently not the conventional Con+/Co(n+1)+ couple, they used in-situ XRD and XAS to determine the real effective active site. The results showed that no phase transitions or valence changes occurred during the reaction; instead, oxygen vacancies were clearly detected by analysis of the Fourier-transformed k2-weighted EXAFS function. They accordingly proposed a new ORR mechanism: the electron hole arising from an oxygen vacancy captures an electron from the anode, forming [Co3+ + e]; the O2 adsorbed on the vacancy then captures the electron from [Co3+ + e] and is reduced. In conclusion, this work testifies that oxygen vacancies can sometimes substitute for Con+/Co(n+1)+ as the active site of a transition metal-based catalyst, and it suggests a route to higher ORR activity by artificially introducing oxygen vacancies.

Notably, for M-N-C catalysts, and especially Fe-N-C catalysts, the exploration of the active site has long been recognized as meaningful work, and with advances in technique, in-situ 57Fe Mössbauer spectroscopy has become the most powerful monitoring method. Li et al. [124] adopted this technique to identify the real active sites of an Fe-N-C catalyst and studied the contribution of the different sites to its catalytic activity and stability. In-situ 57Fe Mössbauer spectra revealed a high-spin (HS) S1 site (identified as HS FeN4C12) and a low- or medium-spin (L or MS) S2 site (identified as L or MS FeN4C10), both contributing to the catalytic activity of the Fe-N-C catalyst. During subsequent operation, S1 degraded by conversion to iron(III/II) oxide, i.e., a direct/indirect demetallation process, in which the indirect route was initiated by local oxidation of the carbon surface or protonation of the basic N in S1. S2, in contrast, remained unchanged in structure and quantity and still contributed markedly to the high ORR activity after 50 h of operation, owing to its more locally graphitized structure, lower production of reactive oxygen species (ROS), or an active carbon top surface. In conclusion, the Fe-N-C catalyst contained two active sites, both contributing to ORR activity in the early stage, while only S2 contributed at the late stage. This work thus showed that in-situ 57Fe Mössbauer spectroscopy is indeed an impactful way to ascertain and trace the active sites of Fe-N-C catalysts during ORR in fuel cells, and it supports the view that the S2 site, i.e., L or MS FeN4C10, is the more significant one for the development of Fe-N-C catalysts and should be emphasized in synthesis.

Similarly, in-situ 57Fe Mössbauer spectroscopy was used by Li et al. [109] to identify the effective active sites, and monitor their evolution, in Fe-NC (four-N-coordinated), Fe-NC-S0.2 (with 0.2 mL of 1H-1,2,3-triazole), Fe-NC-S0.4 (0.4 mL), and Fe-NC-S (1.5 mL, six-N-coordinated) catalysts for ORR in a PEMFC. Taking Fe-NC-S, the most ORR-active catalyst, as an example, the in-situ 57Fe Mössbauer spectra (Fig. 8g) showed that three Fe electronic states coexisted in each spectrum at every potential. In detail, the low-spin (LS) Fe2+ (D1), medium-spin (MS) Fe2+ (D2), and high-spin (HS) Fe2+ (D3) states were attributed, in sequence, to FeIIN4C12, FeIIN4C10, and N-FeIIN4, the last returning to HS over the potential cycle. These results showed that three active sites existed and cycled dynamically during ORR, accompanied by the formation of correlated intermediates. This work thus confirms the importance of the in-situ Mössbauer technique and points out that the dynamic cycling of the sites is crucial for the reaction process; catalyst design can accordingly focus on integrating different sites.

Moreover, modified Fe-N-C catalysts, for example those with heteroatom doping, have shown excellent ORR activity, with effective active sites that differ from those of conventional Fe-N-C catalysts. Exploring the active site of a modified Fe-N-C catalyst is therefore necessary and can help optimize modification strategies. Recently, Chen et al. [125] used in-situ 57Fe Mössbauer spectroscopy to ascertain the effective active site of a sulfur(S)-doped Fe1-NC catalyst for ORR in a fuel cell. The results showed that the spin-polarized configuration of Fe1-NC converted upon the addition of S to its second coordination sphere, and the active site of the S-doped Fe1-NC catalyst was identified as the low-spin (LS) single Fe3+ atom in the C-FeN4-S moiety. Moreover, this LS Fe site facilitated the desorption of OH*, further boosting the ORR activity. In conclusion, S-doping improved the catalytic activity by modulating the spin state of the central Fe atom in Fe1-NC. This provides new insight into how heteroatoms improve the ORR activity of Fe-N-C catalysts and can serve as a reliable basis for designing heteroatom-doped Fe-N-C catalysts.
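In both Mössbauer studies, the spectra are modeled as sums of quadrupole doublets, one per Fe environment, each defined by an isomer shift (the doublet centroid) and a quadrupole splitting (the line separation); the relative doublet areas then track the site populations with potential. A purely illustrative three-doublet model, with invented parameters rather than the fitted values of [109] or [124], is sketched below.

```python
import numpy as np

def lorentzian(v, v0, gamma, amp):
    """Single Lorentzian absorption line (Doppler velocity v in mm/s)."""
    return amp * (gamma / 2) ** 2 / ((v - v0) ** 2 + (gamma / 2) ** 2)

def doublet(v, isomer_shift, quad_split, gamma, amp):
    """Quadrupole doublet: two equal lines centered at the isomer shift,
    separated by the quadrupole splitting."""
    return (lorentzian(v, isomer_shift - quad_split / 2, gamma, amp)
            + lorentzian(v, isomer_shift + quad_split / 2, gamma, amp))

# Hypothetical three-doublet model (D1/D2/D3) of a transmission spectrum.
v = np.linspace(-4, 4, 800)                           # mm/s
model = 1.0 - (doublet(v, 0.35, 0.9, 0.30, 0.020)     # D1: LS Fe(II)-like
               + doublet(v, 0.45, 2.2, 0.30, 0.015)   # D2: MS Fe(II)-like
               + doublet(v, 0.95, 3.0, 0.30, 0.010))  # D3: HS Fe(II)-like
# Relative site populations follow from the relative doublet areas,
# which is how the D1/D2/D3 (or S1/S2) contributions are compared.
```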
In this part, advanced in-situ techniques have been shown to observe the active-site structure of catalysts directly; in particular, even the spin state of the Fe site in Fe-N-C-type catalysts can be clarified by in-situ Mössbauer spectroscopy. In addition, the electronic structure and the number of active sites are shown to be related to a catalyst's intrinsic properties, so detecting electron transfer can also reveal the structure of the active site in reverse. More importantly, certain active sites, such as vacancies, atomically dispersed sites with particular spin states, and heteroatom-doping sites, have been identified as contributing more to the high ORR activity of the prepared catalysts.

To sum up, the application of in-situ X-ray techniques (XRD, XAS), electron techniques (TEM, HRTEM), scanning probe techniques (SECM), and Mössbauer spectroscopy can probe changes in the phase, valence, morphology, and electronic structure of catalysts in real time (Fig. 1). These are helpful for determining the active site of a catalyst and provide direction for its structural design. The specific dynamic changes in structure, morphology, and electronic state accessible to the different in-situ techniques are laid out in Table 2. In addition, in-situ TEM monitoring of the morphology evolution helps determine reasonable synthesis conditions and thus to synthesize better catalysts from the source. It offers a spatial resolution of up to 0.1 nm, unlike the two in-situ scanning probe techniques with morphological characterization capabilities (SECM, discussed above; AFM, to be discussed shortly). The uniqueness of each technique is thus reaffirmed by the works above.

Applying In-situ Characterization Techniques to Reveal the ORR Mechanism

As is well known, the ORR of the fuel cell is an extremely complex process, proceeding either by a two-step pathway (two-electron transfer) or a one-step pathway (four-electron transfer) (Fig. 2a). The latter is the preferred pathway, and two mechanisms are possible for it, namely association and dissociation. Any reaction pathway involves various evolutions of reaction intermediates, so it is essential to monitor and promptly capture the evolution-state information of the intermediates to help clarify the ORR mechanism. In addition, the ORR of the metal-O2/air battery is also a complex reaction with abundant catalytic products (Fig. 2b), and timely capture of the evolution-state information of these products is likewise crucial to revealing its mechanism. Meanwhile, in-situ monitoring of reaction intermediates and products also helps to reveal, indirectly, the factors affecting catalyst activity and to guide structural design (Fig. 5, right part). In this section, the relevant in-situ characterization studies are reviewed. Notably, the labels in the following works, including ad/ads and *, all denote surface-adsorbed species; this will not be repeated below.

Monitoring of the Evolutions for Reaction Intermediates

The evolution-state information of the reaction intermediates can directly help to expound the specific ORR pathway. It can also help, indirectly, to identify the active site and to reveal the origin of enhanced activity, which in turn aids the design of catalysts that favor the two- or four-electron pathway in fuel cells. Moreover, because the intermediates are short-lived, present at low coverage, and vulnerable to co-adsorbed species, in-situ techniques with high sensitivity and fast response are essential for monitoring their dynamic evolution. The following provides an overview of representative efforts at in-situ monitoring of the reaction intermediates.
Typically, monitoring the dynamic evolution of intermediates on a Pt/C catalyst during ORR is useful for ascertaining their identities and lays the basis for clarifying the complex pathways of the reaction. For example, to explore the surface chemistry changes on a Pt/C catalyst, Nayak et al. [93] used in-situ ATR-IR to monitor the ORR process in a fuel cell. The in-situ IR spectra showed that three oxygen-containing intermediates adsorbed on the outer Pt surface during ORR in O2-saturated 0.1 M HClO4 (Fig. 9a). By comparison with the bare carbon support, the three peaks were assigned to the O-O stretching mode (1212 ± 3 cm−1) of superoxide (OOHad), the OOH bending mode (1386 ± 4 cm−1) of hydroperoxide (HOOHad), and the O-O stretching mode (1468 cm−1) of weakly adsorbed oxygen (O2,ad). Comparison with the in-situ IR spectrum of Pt/C during ORR in Ar-saturated 0.1 M HClO4 (Fig. 9a) further confirmed that the 1114 and 1030 cm−1 peaks belonged to the asymmetric ClO4 stretching mode of the electrolyte and the symmetric ClO3 stretching mode of adsorbed ClO4,ads, respectively. The 1435 and 1330 cm−1 peaks, observed on both Pt/C and bare carbon, were classified as carbon surface functional groups, and an occasional small peak at 1260 cm−1 was ascribed to the Si-O-Si band of the silicone glue sealing the optic. More importantly, the integrated intensities of the fitted intermediate peaks varied with potential, confirming that OOHad and HOOHad increased synchronously and dominated the ORR process below 0.8 V. This meant that an additional association pathway did exist in the ORR and contributed most at lower potentials. Overall, this work provides IR fingerprints for identifying intermediates on the Pt/C surface and serves as evidence that the association and dissociation pathways often run in parallel during ORR.

For non-metal catalysts, in-situ IR is equally applicable for ascertaining the intermediates present during catalysis and thereby revealing the ORR pathway. For example, Lin et al. [126] used in-situ ATR-IR to monitor the intermediates on the outer surface of an N-doped carbon catalyst (containing various N species) during ORR. The in-situ IR revealed three intermediates, O2,ads (1468 cm−1), O2−* (1052 cm−1), and OOH* (1019 cm−1), on the catalyst surface during ORR in O2-saturated 0.1 M KOH. The rate-determining step (RDS) of the ORR was identified as O2−* + H2O → OOH* + OH−. In addition, pyridinic N was shown to facilitate ORR along a four-electron pathway, with the carbon atom adjacent to it identified as the main active site. This work thus supplies the IR signatures of the intermediates on an N-doped carbon surface and demonstrates their direct use in revealing the RDS; monitoring the evolution of intermediates can, moreover, serve as a basis for identifying the active site in reverse.
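The potential-dependent "integrated intensities" used in these IR studies boil down to a baseline-corrected band integration repeated at each potential. A minimal stand-in for the peak fitting actually used in [93] and [126] (band limits and variable names are illustrative) might be:

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Baseline-corrected integrated intensity of one IR band.

    A straight baseline is drawn between the band edges (lo, hi, in cm^-1)
    and subtracted before integrating -- a crude stand-in for a proper
    Lorentzian/Gaussian peak fit.
    """
    m = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[m], absorbance[m]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return np.trapz(y - baseline, x)

# Track e.g. the ~1212 cm^-1 OOH_ad band across a set of potentials:
# areas = [band_area(wn, spec, 1180, 1245) for spec in spectra_by_potential]
# Plotting `areas` against potential reproduces the kind of trend used to
# argue that OOH_ad/HOOH_ad dominate below ~0.8 V.
```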
In similar work, Cheng et al. [127] used in-situ synchrotron-radiation FT-IR to monitor the evolution of intermediates during ORR in real time and thereby precisely identify the active site of a NiFe-MOF catalyst. The results showed that lattice stress was introduced into the NiFe-MOF lattice through a controlled "photoinduced lattice strain" strategy, and this strained lattice served as the main active-site contributor. During ORR, the *OOH intermediate (1048 cm−1) was observed to appear at the high-valence Ni4+ active site, indicating that the reaction was dominated by the four-electron pathway. This work sharpens our awareness of reverse-identifying the active site by monitoring intermediates; the active-site information can then help confirm the catalyst structures that favor a particular pathway.

For transition metal-based SAC catalysts, different coordination environments of the central atom can lead to different ORR pathways. Surveying the dynamic evolution of intermediates on the catalyst surface by in-situ IR can help identify the active site and reveal the relationship between the coordination environment and the ORR pathway selectivity. Taking Co SACs as an example, Tang et al. [128] used in-situ ATR-IR to reveal the adsorption behavior of *OOH on CoNC (pure-N coordination) and CoNOC (combined O and N coordination) during ORR in O2-saturated 0.1 M HClO4, and thereby the ORR pathway selectivity. In theory, for CoN4 (pure N) the adsorption site of *OOH is the Co center, and the ORR is dominated by the four-electron pathway; for CoN2O2 and CoO4 (O-containing), the site migrates from Co to the C adjacent to O, and the pathway turns to two electrons. Experimentally, in the in-situ ATR-IR spectra of CoNOC, the 1224 cm−1 (main) and 1030 cm−1 (weak) peaks were assigned to the O-O stretching mode of OOHad and the OOH bending mode of HOOHad, respectively (Fig. 9b). Notably, the OOHad peak of CoNOC emerged at a relatively lower potential and with weaker intensity than that of CoNC (Fig. 9c), confirming the migration of OOHad adsorption in CoNOC from Co to C (the new active site), in line with the theory. In summary, the ORR pathway selectivity on a Co SAC could be tuned by the synergistic effect of the modified first coordination spheres (CSs) (with N and/or O) and the modified second CSs (C-O-C groups): the CoNOC catalyst catalyzed ORR along the two-electron pathway because of the integration of N,O-dual coordination with C-O-C groups. Unlike the other works above, this one provides a basis for designing catalysts with specific ORR pathway selectivity at the molecular level.

In addition to in-situ IR, in-situ Raman spectroscopy is another effective tool for detecting reaction intermediates, tracking their dynamic evolution, and revealing the ORR mechanism. Typically, Dong et al. [89] monitored the real-time surface chemistry changes of low-index Pt(hkl) single-crystal catalysts during ORR in a fuel cell with their self-developed in-situ SHINERS. The in-situ Raman evidence obtained during ORR in O2-saturated 0.1 M HClO4 showed that on Pt(111) there existed a stable HO2* adsorbate (732 cm−1), whereas on Pt(110) and Pt(100) no HO2* was observed; instead, OH* adsorption appeared on both surfaces at 1080 cm−1, with differences in intensity and onset potential. The possible ORR mechanism in acid solution could therefore be interpreted as follows: first, O2 adsorbs on the Pt surface to form adsorbed O2*.
The O2* then combines with 'H', accompanied by an electron transfer, to form HO2*; as the main intermediate on Pt(111), HO2* requires a higher activation energy to combine further with 'H' and form H2O. HO2* also readily splits into OH* and O*, and the OH*, as the main intermediate on Pt(110) and Pt(100), is then reduced to H2O. In conclusion, this work provides Raman signatures of the intermediates on the Pt (111), (110), and (100) crystal planes, which can further help to explain the ORR mechanism in reverse.

In their subsequent work [129], the Raman signatures of the intermediates on high-index Pt(hkl) crystal planes, namely Pt(211) and Pt(311), during ORR in O2-saturated 0.1 M HClO4 were likewise obtained via in-situ SHINERS. In the in-situ Raman spectrum of Pt(311) (Fig. 9d) there was a stable peak at 933 cm−1 from ClO4−. The 1041 and 766 cm−1 peaks appeared at 0.9 V; as the potential decreased, the former strengthened while the latter remained constant. Notably, neither peak was observed in the Ar-saturated HClO4 test, confirming that both were related to the ORR process. To clarify their assignment, a series of isotope-labeling experiments was performed. In the deuteration experiment (Fig. 9e, red curve), the 1041 cm−1 peak shifted to 788 cm−1 and the 766 cm−1 peak to 727 cm−1, showing that both were correlated with 'H'. Specifically, the large change of the 1041 cm−1 peak indicated that it belonged to 'OH', while the change of the 766 cm−1 peak was related to proton interactions, corresponding to 'OOH'. In the 18O2 isotope experiment (Fig. 9e, blue curve), the 1041 and 766 cm−1 peaks shifted to 1001 and 751 cm−1, proving that both were also associated with the 'O' atom and confirming the conclusion of the deuteration experiment. Moreover, both the OH and OOH intermediates also existed on Pt(211), with larger potential-dependent changes corresponding to its higher ORR activity. Unlike on the low-index Pt(111), OOH and OH were more susceptible on the high-index Pt(311) and Pt(211) (Fig. 9f). In conclusion, this work provides the Raman signatures of the OOH and OH intermediates and proves that the ORR activity of a high-index Pt crystal plane is determined by its surface structure and intermediate adsorption; the isotope-labeling experiments, moreover, are confirmed as a powerful means of identifying classes of intermediates. This offers a basis for monitoring the evolution of intermediates in real time to guide the design of Pt-based catalysts with different crystal-plane structures.
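The isotope assignments above can be rationalized with the harmonic-oscillator relation ν ∝ (1/μ)^(1/2), where μ is the reduced mass: substituting D for H, or 18O for 16O, shifts a band by a predictable factor. A back-of-the-envelope check, treating the adsorbates as simple diatomic oscillators (which real surface-bound, mode-mixed species only approximate), is:

```python
import numpy as np

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def isotope_shift(nu_cm, m1, m2, m1_new, m2_new):
    """Harmonic estimate: nu' = nu * sqrt(mu_old / mu_new)."""
    return nu_cm * np.sqrt(reduced_mass(m1, m2) / reduced_mass(m1_new, m2_new))

# O-H band at 1041 cm^-1: H -> D substitution
print(isotope_shift(1041, 16, 1, 16, 2))   # ~757 cm^-1 (788 observed)
# O-O band at 766 cm^-1 (OOH): 16O2 -> 18O2 substitution
print(isotope_shift(766, 16, 16, 18, 18))  # ~722 cm^-1 (751 observed)
```

The crude estimates land in the right direction and rough magnitude, which is all this model promises; the residual discrepancies from the observed shifts reflect coupling to the surface and anharmonicity.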
The dynamic adsorption/desorption behavior of intermediates on Pt alloy catalyst surfaces during ORR has also been widely reported. For instance, Ze et al. [130] used in-situ SERS to monitor the dynamic changes of intermediates on the surface of Au@PtNix NP catalysts in order to reveal the nature of their optimized catalytic performance. For the Au@Pt NP (Fig. 10a), only a single peak (933 cm−1), ascribed to ClO4−, existed at the starting potential; as the potential dropped to 0.85 V, a new peak caused by the O-O vibration of *OOH appeared at 731 cm−1. For the more ORR-active Au@PtNi NP (Fig. 10b), this *OOH peak appeared at a slightly lower position (728 cm−1) and shifted clearly toward 719 cm−1 at low potential (0.15 V). That is, the PtNi surface electronic structure differed from that of pure Pt, in a potential-dependent way, which in turn affected the bond vibration of *OOH. Combined with theory, Ni doping promoted the formation of the binding bond between *OOH and the Pt surface; *OOH on PtNi, with a smaller adsorption energy than on Pt, facilitated electron transfer and consequently boosted the ORR activity. Moreover, for the Au@PtNix NPs, as the Ni content increased, the *OOH peak red-shifted further, owing to the increase in O-O bond length, the decrease in bond energy, and the decrease in *OOH adsorption energy, all ultimately more favorable for ORR. In conclusion, adding Ni induced changes in the electronic structure of the Pt alloy catalyst, with the effect strengthening as the Ni content increased, thereby tuning intermediate adsorption/desorption to improve the ORR activity. This proves that the adsorption/desorption behavior of intermediates can reveal the origin of a catalyst's high activity, pointing to a clear role for in-situ Raman detection.

Similar work has been done on bimetallic nanocatalysts (BNs): Chen et al. [131] used in-situ Raman to monitor the differences in intermediates on the surfaces of AuCu BNs with diverse ordering degrees, thereby evaluating the effect of the ordering degree on ORR at the molecular level. In detail, for the 0%-AuCu BN (completely disordered) (Fig. 10c), the *OH intermediate was shown to exist on the surface, giving a Raman peak at 714 cm−1, with disordered Au-Cu as the active site. For the 30%-AuCu BN (Fig. 10d), the *OH Raman peak (716 cm−1) resembled that of the 0%-AuCu BN, indicating similar electrochemical behavior because of the low atomic ordering. For the 60%-AuCu BN (Fig. 10e), besides the Cu-Oad species (625 cm−1) and the *OH intermediate (709 cm−1), a new Raman peak emerged at 671 cm−1, attributed to the swinging mode of *OH adsorbed on the ordered Au-Cu site. For the 90%-AuCu BN (Fig. 10f), the 623 cm−1 peak and the new 670 cm−1 peak persisted while the 710 cm−1 peak vanished, further confirming that the new peak arose from *OH adsorbed on the ordered Au-Cu site. Notably, after the disordered-to-ordered transition, the ORR activity of the AuCu BN increased more than twofold, and the highly ordered sample even surpassed Pt/C. This indicated that both disordered and ordered Au-Cu can act as *OH adsorption sites, but the ordered Au-Cu site confers high ORR activity through its ability to promote *OH desorption and its lower affinity for O2. Thus, not only can phase-change detection reveal the relationship between the ordering degree of a BN and its catalytic activity, as discussed in Sect. 4.1, but the detection of dynamic changes in intermediates has the same power, as this work shows. This extends the basis for synthesis design and gives new ideas for preparing BNs with precisely controlled ordering degrees in favor of ORR activity.

Besides the adsorption of intermediates, external adsorbates also have a notable impact on the ORR activity of a catalyst, so the corresponding real-time characterization deserves equal emphasis in revealing the mechanism. As an example, Wang et al. [132] utilized in-situ SERS to explore the ORR mechanism of commercial Pt black with controlled sulfide adsorption (θS = 0, 0.19, 0.36, 1).
The in-situ SERS spectra of the Pt surface (Fig. 10g-j) showed that the intensity of the ν(Pt-S) peak (300 cm−1) was positively correlated with the sulfide content, whereas the ν(Pt-O) peak (580 cm−1) was negatively correlated with it and disappeared on the PtS1 (θS = 1) sample. Normalization of the integrated ν(Pt-O) peak intensities for PtS0, PtS0.19, and PtS0.36 showed that the onset of surface Pt oxidation (SPO) shifted toward higher potentials of 0.9, 1.0, and 1.05 V, respectively (Fig. 10k). Both data sets demonstrated that sulfide adsorption kept the SPO at a low degree, i.e., the Pt surface with more sulfide exhibited more resistance to oxidation. The reason was speculated to be as follows (Fig. 10k, inset): the remaining available Pt sites carried more negative charge because of the sulfide adsorption and hence showed enhanced oxidation resistance. In the end, it was proven that moderate external adsorption (e.g., sulfide at θS = 0.2-0.4) left the remaining available Pt sites in a better state, giving better ORR activity and stability. This also points to a new idea for designing highly active catalysts: attention should be paid not only to the adsorption of reaction intermediates but also to that of external adsorbates.
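Extracting the SPO onset from the normalized ν(Pt-O) intensities amounts to a threshold crossing on a normalized area-versus-potential curve. A toy sketch (synthetic data and an arbitrary 5% threshold, not the analysis of [132]):

```python
import numpy as np

def spo_onset(potential_V, pt_o_area, threshold=0.05):
    """Onset of surface Pt oxidation from a normalized nu(Pt-O) band area.

    Normalizes the integrated band intensity to its maximum and returns
    the first potential (scanning positive) where it exceeds `threshold`.
    """
    norm = np.asarray(pt_o_area, float) / max(pt_o_area)
    idx = np.argmax(norm >= threshold)
    return potential_V[idx]

# Synthetic trends mimicking the reported behavior: more sulfide pushes
# the onset to higher potential (0.9 -> 1.0 -> 1.05 V in [132]).
E = np.linspace(0.6, 1.2, 61)
for e_on, label in [(0.90, "PtS0"), (1.00, "PtS0.19"), (1.05, "PtS0.36")]:
    area = np.clip(E - e_on, 0, None)   # zero below onset, rising above
    print(label, spo_onset(E, area))    # prints just above each onset
```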
As discussed above, a variety of intermediates are formed during ORR; they adsorb on the active sites and finally leave the catalyst surface after the corresponding reaction. In-situ monitoring of their dynamic evolution can directly reveal the ORR mechanism. In addition, their adsorption/desorption on the active center depends on the electronic structure of the catalyst, which is closely related to the active site. The detection of intermediates therefore helps identify the active site, which in turn guides the design of efficient catalysts (mainly those favoring the four-electron pathway). Notably, the adsorption energy of the intermediates is a vital descriptor of ORR performance: too weak an adsorption makes activation difficult, while too strong an adsorption causes poisoning, limiting the catalytic activity. The developed catalyst should therefore bind the intermediates with suitable strength. Many factors, such as dopants and high-valence metal species, can tune this binding.

Monitoring of the Evolutions for Products

For the metal-O2/air battery, the complex ORR pathways are closely related to the catalytic products formed during the process. Detecting the evolution of the products is vital for indirectly revealing the active site of the catalyst and for directly identifying the ORR pathway. Different products have different phases, so in-situ XRD and Raman are important for tracing their evolution. Beyond in-situ monitoring of the phase transitions of products formed on the catalyst surface, in-situ description of the dynamic morphological evolution of the charge/discharge products is also crucial for revealing the catalytic process. Representative progress on both fronts is reviewed in this part.

For the detection of phase transitions there is a typical example. To ascertain the active site of a heteroatom-doped carbon air electrode (Meso-CoNC@GF) during ORR and to reveal the reversibility of the Zn-Air battery, Liu et al. [81] used in-situ Raman and XRD to track the phase transitions of the products on its surface. The superior reversibility of the ZnO product was visually displayed in the in-situ Raman spectra: a peak attributed to ZnO gradually appeared at 413 cm−1 and strengthened during discharge (i.e., ORR), and this trend was reversed on charging (Fig. 11a). The same phase transitions were confirmed by the in-situ XRD patterns: the ZnO peak (JCPDS No. 36-1451) continuously strengthened during ORR and then disappeared after charging (Fig. 11b). Notably, two peaks belonging to carbon, at 44° and 54°, gradually weakened during ORR (Fig. 11c), whereas no such changes occurred for the non-nitrogen-doped electrode. This indicated that the nitrogen-doped carbon, as the main active site, carried an electron deficiency and exhibited a stronger catalytic effect on the reaction. In detail, nitrogen, with its higher electronegativity, caused the adjacent carbon atoms to become electron-deficient, favoring the chemisorption of O2 and oxygen-containing intermediates; the successive chemisorption gradually modulated the carbon, weakening the two carbon peaks. Finally, the two weakened peaks recovered after charging, proving the reversibility of the phase transformations at this specific active site. That is, the in-situ results showed that Meso-CoNC@GF as the air electrode ensured the superior reversibility (Zn + O2 ↔ ZnO) of the Zn-Air battery, and that nitrogen-doped carbon was highly active for catalyzing ORR. This work thus proves that nitrogen-doped carbon can act as a specific, highly active ORR site and provides a basis for designing Zn-Air battery catalysts with excellent reversibility.

For the detection of morphological transitions, including nucleation and growth, the powerful tool is in-situ TEM. In addition to monitoring the morphological transitions of catalysts during synthesis (Sect. 4.3), many studies have focused on the charge/discharge products generated on catalyst surfaces during the catalytic reaction. Such studies can, in reverse, determine the optimal catalyst structure for promoting product generation and reversible evolution. Zhu et al. [133] applied in-situ aberration-corrected TEM with a rational environmental cell to disclose the morphological evolution of the products during ORR catalyzed by a bimetallic Pt0.8Ir0.2-doped carbon nanotube (Pt0.8Ir0.2@CNT) in a Na-O2 battery. At the start of discharge, Na+ rapidly diffused to the Pt0.8Ir0.2@CNT driven by a potential of −0.5 V, causing the volume to swell from 50 to 80 nm (Fig. 11d, 0-180 s). After complete Na+ intercalation at 180 s, a hollow sphere gradually formed on the catalyst surface (Fig. 11d, 185 s). This sphere then grew continuously (from 30 to 120 nm), with the shell thickening from 4 nm to a final 15 nm at 246 s (Fig. 11d, 185-246 s). During this process (i.e., ORR), Na+ first reacted with O2 at the Pt0.8Ir0.2 sites on the CNT surface to form the NaO2 phase, which amplified ceaselessly into a larger sphere and further broke down into a hollow Na2O2 sphere (Fig. 11e). During charging, the sphere quickly shrank and gradually disappeared under a potential of +0.5 V at 257-304 s (Fig. 11d), because the deintercalation of Na+ and the escape of O2 led to the complete decomposition of NaO2 and Na2O2 (Fig. 11f).
That is, the Pt0.8Ir0.2@CNT catalyst, with Pt0.8Ir0.2 as the active site, could efficiently facilitate product generation and reversible evolution. This work realized visual monitoring of the dynamic changes of the catalyst surface morphology during ORR; it provides visual evidence for identifying the product and, in reverse, guides the structural design of highly active catalysts (e.g., with a reasonable Pt/Ir ratio).

More importantly, a serious problem for metal-O2/air batteries is that the O2/air electrode is easily passivated, owing to the nature of the discharge products formed. Much research has therefore focused on redox mediators, based on their unique ability to transfer charge from the electrode to the electrolyte. To develop usable redox mediators, in-situ tracking of the morphological evolution of their products can provide rational guidance. For example, Lee et al. [134] used in-situ TEM, likewise with a liquid cell, to follow the growth morphology of the Li2O2 discharge product in the electrolyte under catalysis by the redox mediator 2,5-di-tert-butyl-1,4-benzoquinone (DBBQ) during discharge (i.e., ORR) of a Li-O2 battery. In detail, the Li2O2 product first grew laterally into a disk shape and then along the vertical direction, particularly fast at the peripheral edge, eventually forming a toroidal shape (Fig. 11g). The growth rate was related to the distance between the Li2O2 and the cathode, with DBBQ facilitating the transfer of charge from the electrode to the solution. The results proved that DBBQ can serve as a redox mediator that promotes the growth of the Li2O2 product in the electrolyte rather than on the electrode during ORR, thereby avoiding electrode passivation. This work stands as evidence for in-situ visualization of the ORR process in a liquid system and shows that this ORR proceeds by the solution-growth mechanism. It also proved that the charge-transfer ability (from electrode to solution) of the catalyst is essential for efficient ORR, which is fundamental to designing ORR catalysts for metal-O2/air batteries.

(Fig. 11: a in-situ Raman spectra, b in-situ XRD patterns, and c the in-situ XRD intensity map of Meso-CoNC@GF during discharging (i.e., ORR) and charging in a Zn-Air battery [81]. d In-situ TEM images of the product on the Pt0.8Ir0.2@CNT surface at 0-191 s during discharge (ORR) and at 246-304 s during charging in a Na-O2 battery, with schematics of e the ORR process and f the OER process [133]. g Growth-mechanism schematics of the product in the electrolyte (upper part) and the corresponding SEM image (bottom part), with DBBQ as catalyst, in a Li-O2 battery [134].)

Another serious problem is that the crucial factors affecting the morphological evolution of the products in metal-O2/air batteries are mostly unknown, so combining in-situ electron diffraction and imaging to identify these factors is necessary. In the work of Han et al. [135], the dynamic evolution of the products during the ORR and OER processes, with a Cu-based catalyst on the cathode, was observed by in-situ TEM.
During ORR, NaO2 formed at the early stage and then disproportionated into mesoporous, particle-shaped Na2O2 (with an orthorhombic/hexagonal structure), which eventually became the main product, covering the whole cathode evenly. During OER, the Na2O2 converted back to NaO2, accompanied by volume expansion, and the latter further decomposed reversibly into Na+ and O2. Notably, in-situ electron diffraction revealed that this reversible cycle depended on Cu nanoclusters formed from in-situ solidified CuS. This shows that the size, composition, and distribution of the Cu nanoclusters are essential to modulating the morphology, composition, and size of the products during the reaction. In summary, this serves as another pivotal piece of evidence for the rational design of the catalyst: the above indicators affecting its structure must be regulated to promote the reversibility of a Na-O2 battery.

In addition, in-situ AFM is another powerful tool for monitoring the morphological evolution of the products during ORR in metal-O2/air batteries, especially at interfaces. That is, it offers another useful way to clarify the pivotal factors guiding catalyst structure design and to unravel the ORR mechanism. Shen et al. [97] performed such monitoring at the three-phase interface (Au electrode/electrolyte/O2) during ORR in a Li-O2 battery by in-situ AFM. In detail, Au nanoparticles (NPs) and Au L14-P5 (pore size ~5 nm, ligament width ~14 nm) were used as electrodes, prepared by sputtering Au onto highly oriented pyrolytic graphite (HOPG) for different sputtering times, with Teflon tape as a mask. In-situ AFM of the Au NPs/Au L14-P5 composite electrode (Fig. 12c) showed that the product, ring-shaped Li2O2 (diameter ~500 nm, thickness ~150 nm), was concentrated mainly on the Au L14-P5 (Fig. 12e) and only sparsely distributed on the Au NPs (Fig. 12d) during ORR, indicating that Au L14-P5 had the stronger ORR catalytic activity. This was because the nucleation potential of Li2O2 on Au L14-P5 (2.61 V) was higher than on the Au NPs (2.54 V), making Li2O2 nucleate and grow preferentially on Au L14-P5 (Fig. 12a). Even at the end of ORR (Fig. 12b), large Li2O2 products were still primarily distributed on the Au L14-P5, with few products in the Au NP area, further proving that Au L14-P5 had the more rational structure for high ORR catalysis. This work shows that the nanostructure of Au determines the catalytic behavior of its surface and that a suitable size and nanoporosity are important factors in catalyst structure design. The ring-shaped Li2O2 products observed on the composite electrode indicate an ORR that proceeds by the electrode-growth mechanism.

Afterwards, the same group used in-situ AFM again for similar monitoring of Pt nanoparticle (NP) electrodes during ORR cycling, also in a Li-O2 battery [98]. In the first oxidation-reduction cycle (ORC), the original electrode (Pt-0), with 5 nm Pt NPs, promoted ORR mainly through the surface path (LiO2 + Li+ + e− → Li2O2) (Fig. 12f), with its O2 reduction peak at 2.51 V. In-situ AFM showed that many product NPs (thickness ~10 nm) first deposited on the Pt-0 surface and then converted to Li2O2 nanosheets that grew (finally to a thickness of 20 nm) to cover the surface (Fig. 12h). Over 0-80 ORCs, the Pt-80 electrode, with 10 nm Pt NPs, promoted ORR mainly through the solution-mediated route (2LiO2 → Li2O2 + O2) (Fig. 12g).
Compared with Pt-0, the O2 reduction peak of Pt-80 shifted positively (from 2.51 to 2.58 V), with increased peak intensity. In-situ AFM showed that many product NPs (diameter ~10 nm) first deposited on the Pt-80 surface, quickly grew into new disk-shaped structures with further increases in thickness and diameter, and finally grew into ring-like Li2O2 (diameter ~300 nm) (Fig. 12i). After 250 ORCs, the Pt-250 electrode, with overgrown Pt NPs, exhibited severely decreased catalytic activity, with smaller and fewer Li2O2 deposits on its surface (Fig. 12j). Notably, a modified Pt/Au electrode, made by adding Au NPs onto the Pt NPs, exhibited high ORR activity like Pt-80, also with ring-like Li2O2 (diameter ~200 nm) as the final product (Fig. 12k). In conclusion, the Pt NP diameter grows with the number of ORCs; Pt NPs of suitable diameter ensure high ORR activity and promote the reaction along a solution-mediated route, whereas excessive NP growth reduces the catalytic activity, and electrode modification is useful for maintaining it. This work further strengthens the case for using in-situ AFM to detect the evolution of the products and thus reveal the ORR mechanism. It provides a powerful basis for clarifying, in depth, the factors behind the enhanced catalytic activity during ORCs; a suitable size or a modified structure is essential for developing highly active catalysts.

In summary, different products have different phases and different morphologies, so in-situ Raman/XRD and in-situ TEM/AFM are equally important for tracing their evolution and clarifying the ORR mechanism. Such studies reveal that catalyst parameters such as doping, metal ratio, charge-transfer ability, size, composition, distribution, porosity, and modification affect the ORR activity, further guiding catalyst design.

Monitoring of Electrolyte Interfacial Anion Chemisorption

The adsorption of electrolyte anions on the catalyst surface can trigger poisoning and makes the ORR process more complex. Catalyst contamination is difficult to track with traditional technologies, which makes the poisoning problem hard to solve and the mechanism hard to elucidate. It is therefore essential to employ in-situ techniques that can follow the dynamics of anion adsorption/desorption; these can offer useful information for revealing the ORR mechanism and guide the development of catalysts that inhibit anion adsorption without sacrificing catalytic performance.

For the detection of common anions, in-situ IR is widely used. For instance, Nesselberger et al. [136] observed anion adsorption behavior on a Pt/C catalyst during ORR in a PEMFC in real time using in-situ ATR-FTIR. In 0.5 M H2SO4 (Fig. 13a), the in-situ ATR-FTIR could be fitted with four bands for the Pt/C catalyst, with bare C as a comparison. The SA1 band (1045 cm−1) appeared on both Pt/C and bare C, with lower intensity, and was thus assigned to a bisulfate vibrational mode [137]. The SA3 (1180 cm−1) and SA4 (1235 cm−1) bands both gained intensity on Pt/C and were attributed, respectively, to the interaction of (bi)sulfate with carbon and to the superposition of (bi)sulfate on Pt with C-O stretching. In 0.5 M HClO4 (Fig. 13b), the in-situ ATR-FTIR initially showed three fitted bands, which subsequently merged into two peaks.
The PCA1 (1050 cm−1) and PCA3 (1250 cm−1) bands were assigned, respectively, to the interaction of perchloric acid with carbon and with C-O stretching. The PCA2 band (1095 cm−1), attributed to the Cl-O vibration of the perchlorate anion, varied notably on Pt/C and ultimately overlapped with PCA1 and vanished. In 10 mM H3PO4 + 0.5 M HClO4 (Fig. 13c), the in-situ ATR-FTIR showed a new PA1 band (1000 cm−1), while PA2 (1075 cm−1) and PA3 (1235 cm−1) remained in line with PCA1 and PCA3. The PA1 band (an H2PO4− mode) arose from the interaction of phosphoric acid with Pt/C and was transformed from H3PO4 adsorbed on Pt (C3v symmetry, 1040 cm−1). In summary, the fitted bands in the different electrolytes act as fingerprints depicting the adsorption behavior of the different anions on Pt. This work provides the IR signatures of these common solution anions and a basis for characterizing the effect of anion adsorption on the ORR mechanism.

However, for anions without a clear IR signal, such as halides, in-situ IR is inapplicable. To achieve in-situ qualitative and quantitative characterization of low-level anion adsorption on the catalyst surface, in-situ ETS has been used with special catalysts to monitor the ORR process in fuel cells. For example, Ding et al. [103] detected the dynamic anion adsorption (sulfates, chlorides, etc.) and surface oxidation on ultra-thin Pt nanowires (Pt NWs) by in-situ ETS and correlated them with the ORR kinetics. For sulfates (Fig. 13d), in the negative-potential region the ETS current increased/decreased markedly with the adsorption/desorption of the H monolayer (Hads/desorp; green frame). In the double-layer (D.L.) region the ETS current remained flat, with H2O as the main adsorbed species (blue frame). In the positive-potential region the ETS current again changed markedly, indicating reversible hydroxyl adsorption (OHads; orange frame) and surface oxidation (purple frame). The differential ETS (dETS) (Fig. 13e) showed that both the Oads and Odesorp features (dashed arrows 2 and 3) decreased as the sulfate content increased; the OHads region also shifted notably (red dashed arrow), while the surface-oxidation region remained unchanged. This meant that sulfate adsorbed on the Pt surface by co-adsorption with hydroxyl groups. For chlorides (Fig. 13f), in the D.L. region the ETS current decreased markedly (downward dashed arrow) owing to Clads adsorption in the inner Helmholtz plane (IHP); at high potential it increased slightly (upward dashed arrow) owing to slight surface oxidation. The dETS (Fig. 13g) showed that both Oads and Odesorp decreased as the chloride content increased, owing to the loss of Oads adsorption sites blocked by Clads; the Oads peak also shifted notably, suggesting that the overpotential of O adsorption increased because of Clads blocking. Chloride adsorption on the Pt surface thus changed the kinetics of the intermediate ORR steps, resulting in Pt poisoning. These results suggested that the complex interaction between anion adsorption and oxide formation is the key factor causing site blocking and thereby affecting ORR kinetics.
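Numerically, the dETS curves discussed here are just the derivative of the ETS current with respect to potential, whose peaks flag where the surface coverage changes fastest. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def differential_ets(potential_V, ets_current_A):
    """Differential ETS: d(I_ETS)/dE via a central-difference gradient.

    Peaks in this derivative mark potentials where the surface coverage
    (H, OH, O, or blocking anions) changes fastest, which is how the
    O_ads/O_desorp features and their shifts are read out in [103].
    """
    return np.gradient(ets_current_A, potential_V)

# Usage: compute dETS for each anion concentration and compare the
# position/height of the O_ads feature, e.g.:
# dets_clean = differential_ets(E, i_ets_clean)
# dets_cl = differential_ets(E, i_ets_with_chloride)
```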
It indicates that in-situ ETS usefully complements in-situ IR in revealing the actual changes in the ORR process, thereby facilitating the elucidation of the ORR mechanism. It is also essential to monitor both anion adsorption and surface oxidation in situ during ORR to guide the design of catalysts able to resist corrosion and oxidation. Adsorption of the common SO4 2− anion on the Pt surface is comparatively strong, while the interaction between ClO4 − and Pt is weak; in-situ IR can serve as the key monitoring technique for these anions (which have IR signatures). Cl − is more toxic to the Pt surface, often interacting in complex ways with the formed oxides, blocking the catalytic sites and affecting ORR kinetics; because it lacks an IR signal, its monitoring requires in-situ ETS. Thus, for Pt-based catalysts, ORR testing is preferably performed in HClO4 solution. Combined with in-situ techniques, anion adsorption on any modified catalyst can be traced to visualize the adsorption behavior and to judge the soundness of the catalyst structure design. The information obtained can then illuminate design directions for corrosion-resistant, poisoning-resistant catalysts and help elucidate the ORR mechanism. The discussion above further shows that each technique has its own strengths for monitoring the various species involved in ORR. The detection of dynamic changes of intermediates, products, and anions with the different in-situ techniques is summarized in Table 2. Prominently, the evolution of intermediates, dominated by molecular vibrational changes, is best captured by in-situ IR/Raman; the evolution of products is dominated by morphological changes, so in-situ techniques with imaging capability are most useful there; and signals from solution anions can be obtained by in-situ IR/Raman or by in-situ ETS (especially for halides). Because the complete ORR process involves more than one species, it is crucial to combine several techniques to obtain rich information on all of them and thereby guide the elucidation of the mechanism. Beyond information on intermediates, products, and anions, the coordination/bonding and spin-state information provided by in-situ XAS and Mössbauer spectroscopy (discussed in Sect. 4) is also important. Further species detected on representative electrocatalysts by various in-situ techniques are summarized in Table 3.

Inferring Reaction Mechanism by Associating In-Situ Studies with Theoretical Calculations

Unlike conventional techniques, in-situ techniques can supply direct experimental guidance on the ORR behavior of catalysts by following, in real time, the dynamic evolution of intermediates, catalytic products, electrolyte anions, external adsorbates, and so on. In conjunction with theoretical computation, constructing models that imitate the catalytic process enables deeper, atomic-level investigation of the kinetic factors governing ORR performance. This is essential for revealing the ORR mechanism, and the relevant works are summarized below. Ab initio FEFF8 calculation, a standard companion tool for XAS, is often combined with in-situ XAS during ORR to reveal the electronic-structure changes of a catalyst and to rationalize its enhanced activity.
For example, to probe the origin of the higher activity of a de-alloyed Pt1Co1 NP catalyst, Jia et al. [144] combined in-situ XAS and ex-situ HAADF with FEFF8 to identify the changes occurring during ORR in PEMFCs, using a pure Pt catalyst for comparison. The experiments showed that the de-alloyed Pt1Co1 NP had a distinctive core-shell structure, with an ordered Pt1Co1 core and an ultrathin, nonporous Pt skin as the shell. FEFF8 analysis of the electronic properties showed that the d-band center (ε_d) of Pt19Co6(111) or Pt10Co4(100), i.e., with Co present, was lower than that of Pt25(111) or Pt14(100), respectively. The higher ORR activity of the de-alloyed Pt1Co1 NP was therefore attributed to the lower ε_d induced by both ligand effects and compressive strain, which in turn originated from the Co enriched in the ordered Pt1Co1 core and near the subsurface. This demonstrates the important role of FEFF8 in relating catalytic activity to the electronic-structure changes of catalysts. FEFF8 can also help identify the real active site of a catalyst; for M-N-C materials, and especially the derived SACs whose active sites remain elusive, applying FEFF8 to in-situ experimental analysis is crucial. In the work of Lien et al. [145], to identify the bonding site of oxygen-derived species on pyrolyzed Vitamin B12 (py-B12/C, i.e., Co-Nx/C including biomimetic ligands) during ORR in a fuel cell, in-situ XAS and FEFF8 were used to test the experimental hypotheses and results. As shown by in-situ Co L3,2-edge XANES (Fig. 14a), the Co2+ peak was unchanged in the precatalytic regime (at 1.2 V) relative to the pristine sample; at 1.0-0.4 V the peak shifted positively; at 0.2 V it returned to its initial position. These observations indicate that O2 first adsorbed on py-B12/C under diffusion control only; a chemical reaction then occurred, with electron transfer from the Co 3d orbital to O2*; finally, the product desorbed from the Co site, regenerating it. Thus the Co center, as the active site, was partially oxidized during ORR and, more importantly, the changes of the Co2+ peak (at 1.0-0.2 V) were attributed to the sum of several Co2+-oxo intermediates (detailed in Fig. 14b).

Density functional theory (DFT) calculation, a powerful auxiliary tool for revealing the ORR mechanism, has been widely used to predict the adsorption energies of reaction intermediates and to provide information on activation energies and the electronic structure of catalysts [58]. Building on DFT results, a large body of work has explained experimental observations at the atomic scale. To clarify the possible ORR mechanism of a Pt3Co NP catalyst in fuel cells, Wang et al. [140] investigated the association mechanism of *OOH on its surface via in-situ SHINERS combined with DFT. In 0.1 M HClO4, the in-situ EC-SHINERS spectra (Fig. 14d, e) showed that the 557 cm−1 peak (Pt-O) was visible only at high potential, while a broad peak appeared at 0.7 V and split into peaks at 698 cm−1 (*OOH) and 750 cm−1 with decreasing potential. With the aid of DFT, the 750 cm−1 peak was assigned to *OH adsorbed on the Co site.
Notably, the *OH signal existed only in alkaline solution, indicating that Co has a high oxygen affinity and can coexist with Pt on the catalyst surface in alkaline media, degrading the ORR activity; in acid, by contrast, Co leaches out, so no *OH signal appears and the activity is better. More importantly, taking *OOH as the intermediate, the possible ORR pathway in acid solution was explored by DFT (Fig. 14f). This supplies evidence that DFT can also reveal the origin of performance improvements from the standpoint of intermediate adsorption and lattice strain. DFT calculation can likewise serve as a key tool for identifying the real active site of a catalyst. For M-N-C-like SACs in particular, identifying the active site is crucial but, as described in Sect. 4.4, the real active site remains elusive; its determination requires not only in-situ characterization but also DFT calculations to verify the experimental hypotheses and results. To expose the real active site of a Cu-N-C SAC during ORR in PEMFCs, Yang et al. [138] monitored the dynamic evolution of the Cu-N site by in-situ XAS in combination with DFT. As shown by the in-situ Cu K-edge XANES (Fig. 15a), the white-line peak (E) declined and shifted negatively under potential drive, while the B peak belonging to Cu+ gradually appeared and clearly increased. This implied that Cu2+ was progressively reduced to Cu+, i.e., Cu+ was produced at the expense of Cu2+, so the Cu single atom presumably shuttled between Cu+ and Cu2+ during ORR catalysis. To clarify the dynamic changes of the active center (Fig. 15b-d), the in-situ spectra at 0.82, 0.50, and 0.10 V were modeled by finite-difference near-edge-structure calculations and matched well with the structures Cu-N4, Cu-N3, and Cu-N2-OH, respectively. The active center of the Cu-N-C catalyst therefore evolved dynamically under potential drive as Cu2+-N4 → Cu+-N3 → HO-Cu+-N2. To determine why Cu+-N3 has the higher ORR activity (Fig. 15e), the ORR free-energy changes of Cu+-N3 and Cu2+-N4 were calculated by DFT at U_RHE = 0 V; the energy profile of the former (decreasing monotonically) lay below that of the latter (which rises in the first step) throughout the pathway. The rationale is that O2 → *OOH proceeds spontaneously on the Cu+-N3 site, whereas it must overcome a 0.28 eV barrier on the Cu2+-N4 site, and the subsequent evolution of the *O and *OH intermediates is also easier on Cu+-N3. Hence Cu+-N3 is the real active-site structure of the Cu-N-C SAC. This work underlines the role of DFT in complementing in-situ measurements to reveal structural changes in the active site and to pinpoint the main active site of a catalyst during ORR.
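The free-energy comparison above follows the standard computational-hydrogen-electrode bookkeeping: the cumulative ΔG of the four proton-electron steps (O2 → *OOH → *O → *OH → H2O) is plotted at a chosen potential, with each electron-consuming step shifted by +eU. A minimal sketch is below; the step free energies are illustrative placeholders (chosen so one profile is downhill throughout while the other has a 0.28 eV uphill first step, mirroring the Cu+-N3 vs. Cu2+-N4 contrast), not the published DFT values.

```python
# Computational-hydrogen-electrode bookkeeping for the 4e- ORR pathway.
# Each entry is the reaction free energy (eV) of one proton-electron step
# at U = 0 V; at potential U every reduction step is shifted by +e*U.

def profile(dg_steps, U=0.0):
    """Cumulative free energy along O2 -> *OOH -> *O -> *OH -> H2O."""
    levels = [0.0]
    for dg in dg_steps:
        levels.append(levels[-1] + dg + U)   # one (H+ + e-) per step
    return levels

def limiting_potential(dg_steps):
    """Largest U at which all steps remain exergonic (downhill)."""
    return -max(dg_steps)

# Placeholder step energies at U = 0 V (not the published values):
site_A = [-1.40, -1.32, -1.10, -1.10]  # downhill throughout, like Cu+-N3
site_B = [+0.28, -2.00, -1.60, -1.60]  # uphill first step, like Cu2+-N4

for name, steps in [("site A", site_A), ("site B", site_B)]:
    print(name, "profile at U=0:", [round(g, 2) for g in profile(steps)],
          "| limiting potential %.2f V" % limiting_potential(steps))
```

A negative "limiting potential" for site B signals that its first step is uphill even at 0 V, which is exactly the qualitative argument made for Cu2+-N4 above.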
Furthermore, to address the low metal loadings of M-N-C-like SACs, many studies focus on creating defective carbon (e.g., with vacancies and heteroatoms) and dual-metal atomic sites. For the latter strategy, accurately identifying the dominant active site is relatively complex, so DFT calculations must be combined with in-situ characterization for the analysis. For example, to identify the dominant active site of Fe/Co dual single atoms loaded on porous N-doped carbon nanofibers (Fe, Co SAs-PNCF) during catalysis, Jiang et al. [146] combined DFT with in-situ XAS and Raman. In detail, the Fe and Co K-edge EXAFS spectra of Fe, Co SAs-PNCF were fitted against a series of DFT-simulated FeCo-N models, and the best agreement was obtained for the N3-Fe-Co-N3 model, implying that the theoretical dominant active site was N3-Fe-Co-N3. In-situ Raman analysis then showed that the FeOOH and CoOOH species gradually transformed into Fe(OH)2 and Co(OH)2 during ORR, indicating that both Fe and Co species act as active sites. Further, in-situ XAS showed that the oxidation states of both Fe and Co decreased during ORR, confirming that both Fe and Co sites serve as active centers. Thus N3-Fe-Co-N3, with its Fe and Co dual sites, was identified as the dominant active site. It ensured high metal loading in Fe, Co SAs-PNCF, i.e., it increased the number of effective active sites, lowering the reaction barriers and boosting the ORR activity in metal-air/O2 batteries and fuel cells. This work provides evidence that DFT can help accurately identify the dominant active site of a catalyst under complex conditions and reveal the origin of the activity improvement. Notably, the electrocatalytic activity of Fe-N-C-derived SACs, the most active M-N-C-like SACs, has been widely linked to their spin state, as shown in Sect. 4.4. However, the relationship between the catalytic activity and the spin state of these SACs remains unclear, as does the reason for their poor stability. Ab initio molecular dynamics (AIMD) is therefore needed to simulate the interaction of O2 with Fe-N-C under diverse conditions and help expose the structural evolution during ORR. Recently, to clarify this relationship and the origin of the poor stability, Xu et al. [143] analyzed in detail the dynamic evolution of the active sites in Fe-N-C catalysts during ORR in PEMFCs via in-situ Mössbauer spectroscopy, DFT, and AIMD. The in-situ 57Fe-N-C Mössbauer spectra (Fig. 15f) showed four doublets, D1-D4. As the potential decreased, D1 (low-spin (LS) FeN4C8 and high-spin (HS) FeN4C12) decreased markedly; D2 (LS or intermediate-spin (MS) FeN4C10) decreased only at higher potential and then remained stable below 0.7 V; in contrast, D4 (FeN4 with adsorbed oxygen species [124,147]) increased significantly. This meant that both D1 and D2 sites underwent demetallation during ORR catalysis, degrading the Fe-N-C catalyst; D1, with its faster demetallation rate, was regarded as the main cause of degradation. Moreover (Fig. 15g), D1 underwent a more pronounced transition during ORR at lower potential, where more intermediates are present in the reaction, implying that D1 was the main active site and contributed most to the high ORR activity. Combined with DFT, the activity sequence was FeN4C12 (D1) > FeN4C8 (D1) > FeN4C10 (D2), consistent with D1 having high activity. The stability sequence calculated by DFT was FeN4C12 (D1) > FeN4C10 (D2) > FeN4C8 (D1), also supporting the observations: FeN4C8 (low stability, D1) suffered demetallation, causing the rapid early-stage degradation of Fe-N-C, while FeN4C12 (high activity and stability, D1) persisted, sustaining the subsequent high activity of Fe-N-C.
In-depth studies showed that Fe-N-C decays rapidly only when abundant intermediates and an applied potential are both present. This is likely because the Fe-N bond strength drops sharply after extensive intermediate adsorption under the electric field, further aggravating the demetallation. According to the real ORR conditions simulated by AIMD, the Fe-N bond length of FeN4C8 was 1.841 Å under ideal conditions (Fig. 15h), whereas under simulated operating conditions (Fig. 15i) it deviated from the ideal value by nearly 20%, so the bond was prone to breakage and demetallation, in line with the observed poor stability and verifying the proposed explanation. This work supports AIMD as a powerful method for simulating real operating conditions, exposing structural changes of the active site and revealing the true origin of poor stability; it also shows the value of combining DFT with AIMD to support in-situ analysis. For transition-metal-based catalysts beyond the M-N-C-like family discussed above, some catalysts with complex valence states (discussed in Sect. 4.2) likewise require a combination of calculation and in-situ characterization to reveal the essence of their enhanced activity. For example, for Li/Na/K/Rb/Cs-MnOx catalysts with complex valence states, Kosasang et al. [148] explored, by coupling DFT with in-situ XAS, how alkali cations (Li/Na/K/Rb/Cs) inserted between adjacent layers of multilayer MnOx affect ORR. The ORR activities ranked as follows: first Li-MnOx, then Na-MnOx and K-MnOx, and finally Rb-MnOx and Cs-MnOx. DFT-derived Gibbs free-energy plots for each ORR step on these catalysts showed that the free-energy change of the first step (forming adsorbed *OH) was the largest and was considered the decisive step of the reaction. Notably, Li-MnOx exhibited the smallest first-step free energy and thus the highest ORR catalytic activity. Furthermore, efficient catalysis should be accompanied by a decrease of the Mn oxidation state, similar to the work of Yang et al. [84]; in this work, in-situ Mn K-edge XANES of Li-MnOx confirmed that the Mn valence state did change during the reaction, starting from +3.28. Taken together, these works show that a wealth of relevant information, such as the d-band center, electron transfer, adsorption energies, and Gibbs free energies, can be obtained from such theoretical calculations. These quantities help elucidate the relationship between structural information (such as atomic arrangement and chemical composition) and catalyst performance, help identify the real catalytic site/center, and thereby guide the synthesis of novel catalysts with wide application (not only in fuel cells but also in metal-air/O2 batteries). They also help reveal the adsorption configurations of intermediates, further clarifying the ORR mechanism. However, these calculations have some inherent drawbacks. FEFF8, as a dedicated code for XAS, is not suitable for other in-situ techniques. DFT [149-151] is usually carried out in vacuum, which differs from the real, complex reaction environment, causing discrepancies between experimental and calculated results; the large number of approximations and the limited number of atoms that can be treated also make DFT results somewhat inaccurate and of limited generality. For AIMD [152,153], the heavy computational workload makes it unsuitable for complex systems. In addition, the established descriptors are few and highly specific, which further limits the use of these calculations; the d-band center, for instance, applies only to transition-metal catalysts and not to main-group systems.
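For reference, the d-band center just mentioned is simply the first moment of the metal-projected d density of states, commonly evaluated over the occupied states up to the Fermi level (conventions vary). A minimal numpy sketch is below; the model DOS is a placeholder standing in for output from an electronic-structure code.

```python
import numpy as np

# Model projected d-DOS (placeholder): a Gaussian band centred at -2.3 eV.
energy = np.linspace(-10.0, 5.0, 3001)                 # eV, relative to E_F
pdos_d = np.exp(-((energy + 2.3) ** 2) / (2 * 1.2 ** 2))

# d-band center: first moment of the occupied d-DOS up to the Fermi level.
# The grid is uniform, so the grid spacing cancels in the ratio.
occ = energy <= 0.0
eps_d = (energy[occ] * pdos_d[occ]).sum() / pdos_d[occ].sum()
print("d-band center: %.2f eV relative to E_F" % eps_d)
```

Comparing eps_d between, say, an alloyed and a pure surface is exactly the comparison reported for Pt19Co6(111) versus Pt25(111) earlier in this section.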
In conclusion, given these inherent drawbacks, such calculations should not be used in isolation but rather as an aid to in-situ characterization during ORR. It is also necessary to develop new theory-based descriptors for assessing the activities of diverse catalysts.

Designing In-Situ Test Systems and Coupling Multiple Techniques for ORR Detection

Although great progress has been made in using in-situ techniques to monitor the structure of representative catalysts and the evolution of species on their surfaces, the widespread application of each technique still faces many challenges (Table 1). As shown in Sect. 2, each in-situ technique requires its own purpose-built cell for testing; moreover, even when penetrating hard X-rays are used to probe ORR, the reaction solution, gas bubbles, and cell components all interfere with the signal. A rationally coupled setup, combining the in-situ characterization technique with a suitable in-situ cell or with complementary ex-situ techniques, is therefore critical for in-situ ORR testing and beneficial for collecting the high-quality data needed to elucidate the ORR mechanism. The corresponding progress is reviewed in this section. The abbreviations WE, CE, and RE used below denote working electrode, counter electrode, and reference electrode, respectively. To guide the rational design of in-situ cells, their core features and modification strategies are summarized here. For hard X-ray measurements such as in-situ XRD, Ji et al. [154] designed an open cell to clarify the relationship between intermediate coverage and the catalytic activity of a Pt-based catalyst during ORR in a Zn-air battery, using the intensity changes of the Pt(111) peak, rather than phase changes, as the readout. In detail, the open cell resembled an inverted three-electrode electrochemical system, with the catalyst-coated WE placed centrally to facilitate beam access (Fig. 16a); the recessed chamber both kept the electrolyte from overflowing and avoided window-induced distortion of the X-ray signal. In-situ XRD in this open cell revealed significant (111) peak-intensity changes caused by continuous intermediate coverage, corresponding to the higher ORR activity. This showed that abundant oxygen adsorption underlies the high activity of the Pt-based catalyst, with the peak-intensity change serving as the bridge between them. In conclusion, this work offers a new cell design with a special open chamber; it demonstrates that in-situ XRD peak intensity can serve as an activity descriptor, broadening in-situ analysis beyond phase changes, and more generally that open cells help in-situ XRD reveal the ORR mechanism.
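In practice, the peak-intensity descriptor used in the open-cell XRD work above reduces to tracking a background-subtracted integrated intensity of the Pt(111) reflection over time. A minimal sketch follows; the 2θ window and the synthetic pattern are placeholders, with Pt(111) near 39.8° 2θ for Cu Kα radiation.

```python
import numpy as np

def peak_area(two_theta, counts, lo, hi):
    """Integrated intensity in [lo, hi] degrees after subtracting a
    linear background drawn between the window edges."""
    m = (two_theta >= lo) & (two_theta <= hi)
    x, y = two_theta[m], counts[m]
    bg = y[0] + (y[-1] - y[0]) * (x - x[0]) / (x[-1] - x[0])
    h = y - bg
    return np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(x))   # trapezoid rule

# Synthetic diffractogram: Pt(111) near 39.8 degrees 2-theta (Cu K-alpha).
tt = np.linspace(35.0, 45.0, 500)
pattern = 50 + 0.5 * tt + 400 * np.exp(-((tt - 39.8) ** 2) / (2 * 0.15 ** 2))

# One such area per time slice gives the intensity-vs-time descriptor that
# the open-cell study correlates with intermediate coverage and activity.
print("Pt(111) integrated intensity: %.1f" % peak_area(tt, pattern, 39.0, 40.6))
```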
Also, for in-situ XAS analysis, a typical cell design was reported early on by Erickson et al. [155]. To enable in-situ XAS during fuel-cell ORR at high working current density, they developed an in-situ electrochemical cell with a thin polydimethylsiloxane (PDMS) window and a high-oxygen-flux WE. Concretely, the core PDMS pouch was prepared as follows (Fig. 16b): two micro-structured PDMS sheets were thermally pressed onto both sides of a Nafion membrane to form a PDMS membrane, with a sacrificial bulking layer that was subsequently peeled off; the pouch was then made by cutting away the top of this PDMS membrane. Notably, the pillared-arcade microstructure of the PDMS maintained the proper spacing between the PDMS and the Nafion membrane (Fig. 16b, enlarged part). The cell was assembled as follows (Fig. 16c): the WE and CE were separated into the two chambers of the PDMS pouch; two Teflon tubes were connected to each chamber to supply electrolyte; a plungerless syringe, serving as the RE reservoir, was connected to the electrolyte by one tube; finally, the cell was clamped by a Teflon compression plate with a transmissive window. The PDMS pouch was very thin, ensuring ample oxygen flux to the WE (Fig. 16c, optical photograph). In-situ XAS testing of Pt-catalyzed ORR showed that this cell allowed accurate monitoring of the reaction and proved that the electronic-structure evolution of the metal (Pt) clusters depended on potential and current. This work therefore provides a solid reference for the preparation of the core component and the assembly of such a cell, and it proves that rational in-situ cell design allows the electronic structure of the catalyst surface to be monitored under operating conditions, yielding more accurate pictures of the actual changes in the ORR process and facilitating elucidation of the ORR mechanism. For surface-sensitive techniques such as in-situ Raman and FT-IR, which capture only the signal from the outermost atomic layers of the catalyst and cannot probe deeper, rationally designed in-situ cells are even more important. Taking Raman analysis as an example: to probe structural changes of the active site of a typical Fe-N-C catalyst and reveal the ORR mechanism, Wei et al. [141] coupled a self-modified Raman cell with in-situ Raman measurements for more accurate monitoring of the intermediates. In detail (Fig. 16d), the modified in-situ cell was a three-electrode (WE, CE, RE) epoxy-pool cell fitted with a thin sapphire window (0.5 mm) and connected to the potentiostat. Notably, the gap between the thin window and the WE surface was kept within 0.1 mm so that the solution layer between them was as thin as possible, minimizing its attenuation of the Raman signal. In addition, a notch filter (532 nm) was placed in front of the detector to reject the backscattered laser line. Subsequent in-situ Raman measurements in this modified cell during ORR in a PEMFC showed that, in addition to the Fe-Nx site, two C-N sites (graphitic/pyridinic N) of the Fe-N-C catalyst act as independent active sites, with *O2− and *OOH as their respective adsorbed intermediates. This work thus guides the improvement of Raman cells by attending not only to the cell itself but also to the filtering of the laser, and it proves that a rationally designed in-situ Raman cell makes it possible to decouple multiple coexisting active sites and to identify intermediates accurately, confirming the ORR mechanism.
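As an aside on the 532 nm notch filter mentioned above: Raman shifts are quoted in cm−1 relative to the excitation line, so it is routine to convert between shift and absolute wavelength when choosing filters or spectral windows. A small sketch of the conversion follows, using band positions quoted earlier in this section purely as examples.

```python
def raman_shift_to_wavelength(shift_cm1, laser_nm=532.0):
    """Absolute (Stokes) wavelength for a given Raman shift.

    1/lambda_scattered = 1/lambda_laser - shift, all in cm^-1.
    """
    laser_cm1 = 1e7 / laser_nm            # 532 nm -> ~18797 cm^-1
    return 1e7 / (laser_cm1 - shift_cm1)  # back to nm

# Bands a few hundred cm^-1 from the line sit only ~15-25 nm to the red of
# 532 nm, hence the need for a steep notch filter to remove the elastically
# scattered laser light without clipping the Raman signal.
for shift in (557, 698, 750):
    print(f"{shift} cm^-1  ->  {raman_shift_to_wavelength(shift):.1f} nm")
```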
For monitoring catalyst morphology evolution, two in-situ techniques dominate. One is in-situ TEM: many studies focus on real-time monitoring of the in-situ heating synthesis process (as discussed in Sect. 4.3) to guide the synthesis of high-activity ORR catalysts, and the design of in-situ heating cells (such as heating holders) is relatively mature. This does not mean that in-situ monitoring during the electrochemical process is unimportant; on the contrary, such monitoring is essential for relating the structural stability of the catalyst to the ORR mechanism. Beermann et al. [156] coupled in-situ TEM with an in-situ electrochemical liquid cell to monitor the changes (i.e., degradation pathways) occurring on a carbon-supported octahedral Pt-Ni alloy nanoparticle (Pt-Ni/C NP) catalyst during ORR in a PEMFC. Specifically, the in-situ cell was a flow-cell chip based on the Protochips Poseidon holder; the chip integrated the WE, CE, RE, and electrolyte to establish operating conditions, with a silicon nitride window and a thin liquid layer to enable imaging (Fig. 16e). Such a cell could track changes on the Pt-Ni/C NP surface within seconds under standard or extreme cycling, accurately relating the microstructural response to the applied potential. The tests showed no dissolution of Ni from the octahedral Pt-Ni alloy during cycling up to 1.0 V, stability even when held at 1.2 V, and then rapid coarsening at 1.4 V. Additionally, carbon corrosion accelerated the migration and agglomeration of Pt-Ni NPs along the octahedral (111) crystal plane. Based on these in-situ observations, stable carbon supports and alloys must be constructed to solve the catalyst degradation problem, and control of reaction conditions, such as potential, is also important. It is thus essential to design a sound in-situ electrochemical cell to enable real-time monitoring of the catalyst degradation mechanism and thereby guide its structural design. The other in-situ imaging technique, AFM, used in metal-air/O2 batteries to monitor the phase evolution of intermediates/products, is purely force-based and requires a comparatively simple supporting cell. To observe the details of the ORR process on an air electrode (with highly oriented pyrolytic graphite, HOPG, as the catalyst), Wen et al. [157] designed a simple cell to aid in-situ AFM testing during ORR in a Li-O2 battery. Precisely (Fig. 16f), the cell contained HOPG as the WE and a Li wire as both CE and RE, with O2-saturated organic electrolyte. Notably, the HOPG catalyst underwent a series of cleaning treatments, such as heat treatment and stripping of the top layers, to provide better conditions for detecting phase evolution; the Li wire was suspended along the Pt wire and the inner cell wall to avoid contact with the insulating AFM tip (made of triangular silicon nitride). With this home-built cell, in-situ AFM showed that during ORR the formed nanoparticles grew quickly at the HOPG step edges, developed into nanoplates that grew continually to large size, and finally merged into a Li2O2 film.
This work thus proves that coupling in-situ AFM with a modified cell can provide direct visual evidence of the details of intermediate/product evolution, information that can be used to clarify the ORR mechanism; it also shows that controlling the distance between the electrode and the AFM tip is the key skill in designing an in-situ AFM cell. It is worth noting that the rotating disk electrode (RDE), a standard method for reducing mass-transfer effects, is often used in ORR studies, which are strongly affected by mass transfer. Thus, in addition to the development of various in-situ cells, coupling with an RDE is also important for in-situ characterization of ORR. For example, Sengupta et al. [139] used in-situ SERRS coupled with an RDE (Fig. 16g) to identify intermediates and investigate the ORR mechanism with iron porphyrins as catalysts. Taking iron α4-tetra-2-(4-carboxymethyl-1,2,3-triazolyl)-phenyl porphyrin (FeEs4) as an example, its differential spectra showed several peaks in the low-frequency region (Fig. 16h); among them, the 830, 782, and 631 cm−1 peaks all increased markedly in intensity as the potential decreased. After experiments in buffers (Fig. 16i), two classes of peaks, the ν4 and ν2 vibrations, were observed: HS FeII-OH2 (1352 and 1540 cm−1), LS FeIII-OOH (1369 and 1565 cm−1), and FeIV=O (1371 and 1571 cm−1) all increased in intensity, while HS FeIII-OH (1364 and 1555 cm−1) decreased, as the potential was lowered. These in-situ SERRS-RDE results revealed that, moving from the kinetically controlled to the mass-transfer-controlled region, HS FeII-OH2, LS FeIII-OOH, and FeIV=O gradually accumulated at the electrode at the expense of HS FeIII-OH. In the kinetic region secured by the RDE, the ORR mechanism and the Fe conversions were therefore as follows (Fig. 16j): HS FeIII-OH is largely reduced to HS FeII-OH2 as the O2-derived species rapidly diminish (i → ii, the rate-determining step); the subsequent combination with O2 and the stepwise reactions with H+/e− all proceed slowly, as evidenced by the slow decay of HS FeII-OH2 (ii → iii), LS FeIII-OOH (iv → v), and FeIV=O (v → i). The combined application of the two techniques enables direct recognition of the dynamic changes of O2-derived species on the surface of iron porphyrin-type electrodes during ORR, and it offers a new model for combining in-situ characterization with electrochemical testing. The rotating ring-disk electrode (RRDE), which adds a Pt ring electrode to the RDE, is also commonly used in ORR research. Recently, Ni et al. [142] coupled an RRDE with in-situ Mössbauer spectroscopy to relate the active sites to the ORR pathway in a PEMFC, using a porphyrin-based FeNC catalyst; the Pt ring served mainly to detect H2O2 formation. The results showed that the D3 site (N-FeN4C10) appeared at 0.8 V and contributed mainly to direct oxygen reduction, i.e., the four-electron pathway, with only a small contribution to H2O2 formation, whereas the D2 site (FeN4C10) emerged at 0.6 V or lower and contributed mainly to indirect oxygen reduction, i.e., the peroxide reduction reaction (PRR). Thus the onset potentials of the two Fe active sites differed and corresponded to different catalytic pathways, the pathways being revealed indirectly via RRDE detection of H2O2. This further reinforces the value of combining in-situ characterization with electrochemical testing.
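For context, the RRDE readout used above rests on two standard formulas: the H2O2 yield and the apparent electron-transfer number are computed from the disk current I_d, the ring current I_r, and the ring collection efficiency N. A minimal sketch with placeholder currents follows.

```python
def rrde_metrics(i_disk, i_ring, N=0.37):
    """Standard RRDE analysis for ORR.

    H2O2 % = 200 * (I_r / N) / (I_d + I_r / N)
    n      = 4 * I_d / (I_d + I_r / N)
    with currents taken as magnitudes and N the collection efficiency.
    """
    ring = abs(i_ring) / N
    disk = abs(i_disk)
    h2o2 = 200.0 * ring / (disk + ring)
    n = 4.0 * disk / (disk + ring)
    return h2o2, n

# Placeholder currents (mA); N = 0.37 is a typical collection efficiency.
h2o2, n = rrde_metrics(i_disk=-1.00, i_ring=0.02)
print(f"H2O2 yield: {h2o2:.1f} %, electron-transfer number n = {n:.2f}")
```

A site dominating the four-electron pathway gives n close to 4 and a small H2O2 yield, which is how the D3 versus D2 assignment above is quantified.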
Thus, the "start potential" of the two Fe active site were different, and were related to different catalytic pathways, in which the pathway was indirectly revealed by the identification of the H 2 O 2 through RRDE. This further strengthens our awareness of combining in-situ characterization with electrochemical testing. The coupling strategies help to ensure any in-situ technique to reveal the actual ORR process, thereby boosting the clarification of the ORR mechanism. This part further confirms that to accurately monitor any changes in real time during ORR, the construction of in-situ test system is essential. Notably, in liquid solutions, more optical spectra can be used to reflex the signals from the molecular level, while, there still exists difficulties for electron microscopy to visualize the changes (such as migration and agglomeration) at the atomic level. Thus, it is still urgent to explore and develop novel in-situ cell for the electron microscopy. In addition, more coupling techniques, except for the above mentions, should receive attention and the technical difficulty that needs to be broken is the modification of the laser beam channel. In conclusion, the application of in-situ optical techniques (IR, Raman), electron technique (TEM), and scanning probe technique (AFM) for probing the evolution states of intermediates and products are summarized (Fig. 1). These are helpful to clarify the ORR mechanism, and can help to indirectly reveal the active site of catalyst for guiding the synthesis of the catalyst. In addition, in-situ IR/Raman monitoring of the anion chemisorption and in-situ ETS for halides are conducive to provide a realistic in-situ detection of external species on the catalyst surface under actual operating conditions. These external species tend to occupy the active sites of catalysts affecting their activity; thus, their accurate detection is beneficial to guide the structural design of antitoxic catalysts. The combination of theoretical calculations to assign the detected in-situ signals, and associating hardware development to ensure the accuracy of experimental detection, are also proved to be non-negligible. Conclusions and Perspectives This paper furnishes an overview of the applications of various in-situ characterization techniques in probing active sites of catalysts and revealing ORR mechanisms. In detail, the direct detection progress of catalyst structure evolution is outlined; the adsorption/desorption behaviors of intermediates/solvent anions, the formation and evolution of products are summarized; how to combine theoretical calculations to assist in assigning in-situ signals is discussed; other factors affecting the accuracy of the characterization results are also summarized, such as the designing of in-situ cells and the coupling of various techniques. The conclusion, focusing on Pt-based, M-N-C and some oxide catalysts, can be categorized into two points (Fig. 5). For one thing, the phase, valence, electronic transfer, coordination, and spin states varies of the catalysts during ORR can be directly characterized by in-situ XRD, HRTEM, XAS, SECM, and Mössbauer spectroscopy. These can help to identify the active site of the catalyst, and clarify the factors that enhance catalyst activity to guide how to design optimal catalyst structure. The extensive in-situ works have further demonstrated that catalysts synthesized with specific morphologies, high index crystal planes, vacancies, etc. tend to exhibit high oxygen reduction activity. 
Prominently, using in-situ TEM to monitor morphological changes during catalyst synthesis helps pinpoint, at the source, the right temperature and time for preparing highly active catalysts. Second, in-situ detection of intermediates by FT-IR, Raman, and ETS, and of products by XRD, TEM, and AFM, can achieve the same goals indirectly: determining the evolution of intermediates and products clarifies the reaction pathways and reveals the reaction mechanisms. Notably, catalysts with multi-dimensional morphologies can increase active-site utilization and promote the adsorption and desorption of intermediates and products on their surfaces, facilitating a more complete conversion of O2 along the 4e pathway. These results underline that each technique has its own unique detection characteristics and should be used according to the need at hand. In addition, many challenges and opportunities remain for future research on in-situ ORR characterization. Several recommendations for the development of in-situ studies of ORR processes follow (Fig. 17): 1. Further improve in-situ observation techniques to directly observe active-site evolution under dynamic conditions. Many studies have shown that the catalyst structure is continually reconfigured and the atomic coordination environment altered during ORR, especially under realistic or high-potential conditions, which further drives dynamic evolution of the active site. Identifying the active site requires monitoring the catalyst structural evolution by in-situ techniques; however, both beam damage and the requirement of sample transparency limit the application of electron-beam and low-energy-photon techniques to the ORR process. High-speed, high-resolution diffraction methods should be developed and used to track fast reactions, so that the fine structural parameters extracted from the data can directly demonstrate active-site evolution during ORR. In addition, the activation and deactivation of catalysts take relatively long times, and the dedicated beamlines of some techniques (such as synchrotron radiation) are not always available, so finding alternatives is particularly important. 2. Further monitor or visualize the variations of reaction intermediates using coupled techniques or modified cells. Real-time monitoring of the ORR process is crucial for enabling direct observation of the transient transformations of reaction-intermediate configurations and thereby illustrating the catalytic mechanism. Unfortunately, simultaneous monitoring of all intermediate variations during ORR remains a challenge because of the inherent limitations of any single technique. The development of devices integrating several in-situ techniques (such as in-situ SERRS coupled with RDE) will therefore undoubtedly play a part in better understanding the ORR process, and the coupling of various in-situ techniques is also important for providing richer information on intermediate structures. Furthermore, for in-situ techniques using X-ray or electron beams, reaction cells containing large amounts of solution are often unacceptable, because the solvent medium strongly absorbs the transmitted/scattered beam.
Researchers should design reaction cells around the characteristics of in-situ X-ray techniques to ensure the accuracy of the experimental data while conserving beam time. In particular, a well-designed reaction cell for in-situ TEM can enable real-time visualization of the dynamic changes of the intermediates formed at the active sites during ORR under more realistic conditions. 3. Deepen theoretical analysis by integrating theoretical insight with the various in-situ techniques. Incorporating theoretical calculations into in-situ monitoring is an efficient strategy for identifying active sites and uncovering ORR mechanisms, and developing a variety of computational models to predict the structural evolution of catalysts during ORR is an inevitable trend. Under extreme conditions (such as high temperature and pressure) in particular, in-situ characterization of catalyst structural evolution by experimental techniques is difficult; constructing computational models with optimized reaction parameters to simulate the entire ORR process then becomes especially important, as such models can predict the associated structural evolution of the catalyst and infer the ORR mechanism from theory. It is equally important to continue optimizing these models so that theoretical calculations integrate with different experimental techniques; this will simplify mechanistic research and allow highly efficient catalysts to be constructed rapidly. 4. Explore a new systems-analysis engineering paradigm to guide catalyst design and advance industrial ORR applications. Many novel structures have been devised to regulate the intrinsic activity of Pt-based, M-N-C, and oxide catalysts, yet the true structures of their active moieties and the structure-activity relationships remain unclear. Exploring a new systems-analysis engineering paradigm that combines high-throughput synthesis with theoretical calculations and in-situ databases is central to elucidating their ORR catalytic processes. As a proof of concept, promising single-particle catalysts, whose activity is enhanced through the modification of individual facets and their interfaces [158], undergo dynamic restructuring and multiple evolutions during ORR and thus urgently require real-time in-situ monitoring. Mature spectroscopic methods can reflect the average structural information of a single particle, but individual facets remain difficult to discern owing to resolution limits. Recent developments in in-situ field electron microscopy (FEM) [159] are expected to enable characterization at the single-particle level and to enrich the databases. By integrating various in-situ technologies, massive databases can be created to accelerate the search for suitable single-particle active structures for industrial ORR applications. Further combination with theoretical calculations can provide deeper insight into the precise electronic environment of the minimal catalytic building block of a single-particle catalyst at the atomic level. Finally, the computational simulations should be aligned with industrial standards to assist, from the outset, in the design of superior ORR catalysts that meet real industrialization requirements.
2022-12-31T16:07:02.862Z
2022-12-29T00:00:00.000
{ "year": 2022, "sha1": "a422c0d51c6a1fed32adf4d5fabd0f556171bd47", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5f8dc7fb1d213c932d561ecf76bb360f00b04195", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
235382136
pes2o/s2orc
v3-fos-license
Protein purification and crystallization of HLA-A∗02:01 in complex with SARS-CoV-2 peptides

Summary
Understanding T-cell responses requires identifying viral peptides presented by human leukocyte antigens (HLAs). X-ray crystallography can be used to visualize their presentation. This protocol describes the expression, purification, and crystallization of HLA-A∗02:01, one of the most frequent HLA alleles in the global population, in complex with peptides derived from the SARS-CoV-2 nucleocapsid protein. The protocol can be applied to different HLA class I molecules bound to other peptides. For complete details on the use and execution of this protocol, please refer to Szeto et al. (2021).

Before You Begin
This protocol can be applied with a variety of different peptides. It has been optimized for the HLA-A*02:01 molecule but has been used for different HLA class I molecules by optimizing the refolding time (see below) for the HLA protein of interest. Here, we describe complex formation with a SARS-CoV-2 nucleocapsid-derived peptide, N222-230 LLLDRLNQL. Other peptides can be substituted, but the conditions may vary, as described in Szeto et al., 2021 (iScience, doi: 10.1016/j.isci.2021), where five different SARS-CoV-2 peptides were used.
Timing: 6 hours
Note: A DNA plasmid (here the pET-30a(+) vector, although any vector for protein expression in a bacterial system can be used) encoding the heavy chain of the HLA class I molecule of interest, here HLA-A*02:01, and a plasmid encoding human β2-microglobulin need to be available and kept at −20°C.
Note: The peptide of interest, here peptides derived from the SARS-CoV-2 nucleocapsid protein, needs to be at least 95% pure, and is usually supplied as a powder kept at −20°C.
Note: When working with bacterial cultures, make sure everything is sterile. Clean everything (bench, pipettes, etc.) with 70% EtOH and work under aseptic conditions (under a flame) to avoid any contamination.
Note: For protein expression, other bacterial transformation methods, such as electroporation with electrocompetent E. coli expression strains, can also be used.

Step-by-Step Method Details
Expression of HLA-A*02:01 and human β2-microglobulin and extraction of insoluble proteins (inclusion bodies)
Timing: 5 days
This step describes the expression of the HLA-A*02:01 heavy chain and human β2-microglobulin as well as their recovery from inclusion bodies after Escherichia coli expression using a mild solubilization process. This protocol can be applied to a variety of different HLA class I molecules.
Note: The gene sequence of β2-microglobulin (below) was inserted into pET-30a(+) cut with NdeI/HindIII restriction enzymes:
IQRTPKIQVYSRHPAENGKSNFLNCYVSGFHPSDIEVDLLKNGERIEKVEHSDLSFSKDWSFYLLYYTEFTPTEKDEYACRVNHVTLSQPKIVKWDRDM
The gene sequence of the HLA-A*02:01 heavy chain (below) was inserted into the pET-30a(+) vector cut with NdeI/HindIII restriction enzymes:
GSHSMRYFFTSVSRPGRGEPRFIAVGYVDDTQFVRFDSDAASQRMEPRAPWIEQEGPEYWDGETRKVKAHSQTHRVDLGTLRGYYNQSEAGSHTVQRMYGCDVGSDWRFLRGYHQYAYDGKDYIALKEDLRSWTAADMAAQTTKHKWEAAHVAEQLRAYLEGTCVEWLRRYLENGKETLQRTDAPKTHMTHHAVSDHEATLRCWALSFYPAEITLTWQRDGEDQTQDTELVETRPAGDGTFQKWAAVVVPSGQEQRYTCHVQHEGLPKPLTLRWEPSS
Both sequences and their successful sub-cloning into the pET-30a vector were confirmed by sequencing (Genscript).
i. Thaw competent cells on ice. Note: Competent cells are very fragile; do not warm the cells by holding the bottom of the tube in your fingers.
Note: Work under a flame, or in a fume hood, to maintain aseptic conditions. Alternatively, if this is not possible, make sure everything is sterile by cleaning the bench, pipettes, etc. with 70% EtOH to avoid any contamination.
ii. Add 0.5-1 μL (<100 ng) of plasmid to the competent cells. Note: Use 50-100 ng of plasmid. The volume of plasmid added to the competent cells should not exceed 10% of the final volume, as the H2O may cause lysis of the cells.
iii. Incubate the cell-DNA mixture on ice for 20 min.
iv. Heat-shock the competent cells at 42°C in a water bath (or heat block) for 45 seconds.
v. Note: The volume of bacterial culture determines the final protein yield and can be adjusted to your needs. Note: When growing bacterial cultures, it is important not to let the OD rise above 0.6. The OD should be carefully monitored and checked often, especially once it exceeds 0.2, as cell growth is exponential. Harvest the cells by centrifugation at 5000× g for 15 min at 4°C. Resuspend the cell pellets in MilliQ H2O (20 mL in total for the entire 6× 800 mL culture). We recommend using 50 mL Falcon tubes for easy storage and manipulation.
iv. Store the cell pellet in the −80°C (long-term storage) or −20°C (short-term storage) freezer.
Pause point: You can either proceed to the preparation of insoluble proteins (inclusion bodies) or store the cell pellets as described above.
Alternatives: Other methods can also be used to lyse the bacterial cells (e.g., French press, sonication).
Homogenise and centrifuge at 10,000 g at 4°C for 10 min.
iv. Repeat i-iii until the pellet is clean of dark debris (a minimum of 4 washes).
v. Discard the supernatant and resuspend the pellet in 150 mL of Wash Buffer 2.
vi. Homogenise and centrifuge at 10,000 g at 4°C for 10 min.
viii. Discard the supernatant and resuspend the pellet in 5 mL of Guanidine·HCl Buffer for a 6× 800 mL culture (1 mL/L of culture), on a rotating wheel for 14-17 hours at 4°C. Note: If the mixture turns into a jelly-like consistency, add 10 mM DTT.
Note: For problems that may arise during this procedure, please refer to Troubleshooting Problems 1 and 2 below.
Note: The molecular mass of the HLA α-chain is ~32 kDa and that of human β2-microglobulin is ~10 kDa. Aliquot them out to: 10 mg for human β2-microglobulin; 30 mg for the HLA-A*02:01 heavy chain.
ii. Store at −80°C or proceed to the next step (refolding).
Pause point: You can proceed to the refolding steps below or store the inclusion bodies long term at −80°C.

Refolding of the HLA class I molecule with human β2-microglobulin and the peptide of interest (SARS-CoV-2 N222-230 LLLDRLNQL), followed by dialysis
Timing: 1.5 days
The refolding step produces soluble peptide-HLA complexes. During this step, the pHLA complex is formed, composed of the HLA-A*02:01 heavy chain, human β2-microglobulin, and the peptide of interest, in this case the SARS-CoV-2 N222-230 LLLDRLNQL peptide.
a. Prepare, wash, and equilibrate the DEAE-C resin as described in "Before You Begin".
b. Filter the dialyzed protein sample through a 0.22 μm syringe filter and load it onto the column at 4°C.
c. Elute bound proteins with 5 CV (column volumes) of 10 mM Tris-HCl pH 8.0, 150 mM NaCl.
d. Visualise protein purity by SDS-PAGE.
e. The DEAE resin can be washed with 1 M NaCl followed by 10 mM Tris-HCl pH 8.0 and stored in 20% EtOH at 4°C for reuse.
DAY 7, Timing: 3 hours
6. Ion exchange chromatography - HiTrap Q HP, 1 mL
a. Concentrate the eluted protein using a centrifugal concentrator with a 10 kDa cut-off and buffer-exchange into 10 mM Tris-HCl pH 8.0 four times. Critical: This step is crucial, as NaCl must be removed from the sample before loading it onto the HTQ column. Note: The protein of interest must be filtered through a 0.22 μm filter before loading onto the column to remove particulate material. The starting sample volume should be less than 1 mL for the HTQ column and should be diluted in equilibration buffer (see below) to less than 5 mL (a 5-fold dilution). A 5 mL loop should be used on the FPLC machine for injection onto the HTQ column. Check the pressure limit of the column and set this parameter on the FPLC. If the sample volume exceeds the loop volume, multiple injections can be performed.
b. Connect the column to the FPLC with the desired buffer (Pump A: 10 mM Tris-HCl pH 8.0).
c. Wash the column with five column volumes of 10 mM Tris-HCl pH 8.0, 1 M NaCl (Pump B).
d. Equilibrate the column with five column volumes of 10 mM Tris-HCl pH 8.0 (Pump A).
e. Start fractionation before protein injection. Ensure the deep-well plate or tubes are in the correct position. Collect 1 mL fractions.
f. Inject the protein sample and run at a flow rate of 1 mL/min.
g. After two loop volumes (10 mL), set a gradient from 0 to 20% of 10 mM Tris-HCl pH 8.0, 1 M NaCl (Pump B) over 20 min.
h. The peptide-HLA complex usually elutes at ~15% of 10 mM Tris-HCl pH 8.0, 1 M NaCl, although this can vary between proteins.
i. Run an SDS-PAGE gel of the fractions of interest.
j. Pool the required fractions based on the purification profile and gel analysis.
k. Concentrate the protein sample using a 10 kDa cut-off concentrator and measure the protein concentration with a UV spectrophotometer at 280 nm absorbance. The sample needs to be concentrated to the desired concentration for crystallization; the HLA-A*02:01-SARS-CoV-2 N222-230 LLLDRLNQL complex was concentrated to 5 mg/mL to set up crystal trays.
Note: The protein sample can be stored at 4°C short term or at −20°C long term; however, it is highly recommended to use it straight away to set up trays.
Note: For problems that may arise during this procedure, please refer to Troubleshooting Problems 3 and 4 below.

Crystallization of the HLA-A*02:01-SARS-CoV-2 N222-230 LLLDRLNQL complex
Timing: time varies for crystal formation (hours to months)
This step describes the crystallization process for the HLA-A*02:01-N222-230 LLLDRLNQL (pHLA) complex. The condition used is 20% PEG3350 w/v, 0.2 M potassium formate, 1 mM CaCl2, via sitting-drop vapour diffusion at 20°C with a protein:reservoir drop ratio of 1:1, at a protein concentration of 5 mg/mL in 10 mM Tris-HCl pH 8.0, 150 mM NaCl (Szeto et al., 2021). This process can be applied to a variety of different proteins/peptides/protein complexes; however, many factors, such as the crystallization condition, protein concentration, and the use of seeds and additives, can vary significantly.
Note: The purity of the protein sample should be evaluated prior to crystallization. For the initial crystallization screening, the sample should be at least 95% pure by SDS-PAGE. If possible, it is recommended to also evaluate the homogeneity and monodispersity of the sample, as both factors can significantly influence the crystallization outcome.
Critical: Avoid freeze/thawing the protein sample multiple times and use freshly purified protein for crystallisation: set up the trays within a week of purification, or pass the sample through a HiTrap Q HP column again to remove any unfolded material prior to setting up crystallisation trials.
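The A280-based concentration measurement in step 6k is a Beer-Lambert calculation, c = A / (ε · l), converted to mg/mL via the molecular weight. A minimal sketch follows; the extinction coefficient and mass below are placeholders, and the values for a given pHLA complex should be computed from its actual sequence (e.g., from its Trp/Tyr/Cys content).

```python
def a280_to_mg_per_ml(a280, ext_molar, mw_da, path_cm=1.0, dilution=1.0):
    """Beer-Lambert: c (M) = A / (epsilon * l), then convert to mg/mL.

    ext_molar : molar extinction coefficient at 280 nm (M^-1 cm^-1)
    mw_da     : molecular weight of the complex (Da = g/mol)
    """
    conc_molar = dilution * a280 / (ext_molar * path_cm)
    return conc_molar * mw_da / 1000.0          # g/L == mg/mL

# Placeholder numbers for a roughly 45 kDa pHLA complex (heavy chain +
# b2-microglobulin + peptide); epsilon must be recomputed from the real
# sequence before use.
print("%.2f mg/mL" % a280_to_mg_per_ml(a280=0.95, ext_molar=85000, mw_da=45000))
```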
Timing: 2 hours
7. Set up sitting-drop 48-well plates according to, or using reagent formulations from, Hampton Research commercial screening kits. The kit used here is the PEG/Ion HT crystallization reagent kit.
a. Prepare approximately 50 μL (per plate to be set up) of the pHLA-A*02:01 peptide complex at approximately 5 mg/mL in the buffer used for protein purification (10 mM Tris-HCl pH 8.0, 150 mM NaCl). Note: The protein concentration needed to form crystals can vary.
b. Pipette 100 μL of crystallisation solution into the deep well.
c. Pipette 1 μL of crystallisation solution from each deep well onto the well, followed by 1 μL of the protein sample.
d. Seal the plate with Crystal Clear plate sealing tape and store the plate at 20°C. Note: Alternatively, hanging-drop plates can be used with siliconized glass cover slides.
e. Wait until crystals appear. Note: Crystallization is a multi-factor process, and not all factors are under your control. Please see Troubleshooting Problem 5 below for alternative tips that may facilitate the process.
Timing: 2 hours

Optimization
Depending on the crystal quality, such as size, shape, and diffraction quality, optimization steps may be needed. Here we optimized the initial conditions in which crystals grew by using the Additive Screen HT and micro-seeds from the drop that gave crystals.
a. Set up sitting-drop 48-well plates using the condition that produced crystals.
b. Pipette 1 μL of the crystallisation solution from each deep well onto the well, followed by 1 μL of the protein. Note: The protein concentration may need to be adjusted depending on the initial crystals; for example, if there are multiple clusters of crystals in the drop, or heavy precipitation, the protein concentration may need to be decreased.
c. Add 0.2 μL of an additive from the Additive Screen HT to your drop (protein plus crystallization solution); in this case 1 mM CaCl2 was used.
d. Add crystal micro-seeds if initial crystals have been obtained. Crush the seeds finely and add 0.2 μL of crushed seeds to each drop.
e. Seal the plate with Crystal Clear plate sealing tape and store the plate at 20°C.
f. Wait until crystals appear.

Expected Outcomes
The yield of inclusion bodies is typically 125 mg/L of bacterial culture for the HLA-A*02:01 heavy chain and 48 mg/L of bacterial culture for human β2-microglobulin. The purity of the inclusion bodies can be increased by increasing the number of washes with Inclusion Bodies Wash Buffer 1 (Figure 1). The final yield of pHLA is expected to be 0.5-5 mg, depending on the peptide and the HLA molecule. After the first purification step, using the DEAE-C resin, the purity of the pHLA complex is expected to be >80%, as shown in Figure 2. During the second purification step, the pHLA complex elutes at approximately 15 mS/cm on the HiTrap Q HP column (Figure 3). The purity of the complex is expected to be >90%, as shown in Figure 4. Crystals of the pHLA complex were grown in 20% PEG3350 w/v, 0.2 M potassium formate, 1 mM CaCl2 (Figure 5A) via sitting-drop vapour diffusion at 20°C with a protein:reservoir drop ratio of 1:1, at a protein concentration of 5 mg/mL in 10 mM Tris-HCl pH 8.0, 150 mM NaCl (Szeto et al., 2021). Crystals appeared within 48 hours.
Optimization trials using micro-seeds from the drop shown in Figure 5A led to bigger crystals, as shown in Figure 5B, which appeared within 24 hours.

Limitations
Limitations of the protocol are mentioned in the Troubleshooting section below.
Potential solution: First, check that the centrifugation speed and time are adequate. We recommend using 250 mL centrifuge bottles to hold the 150 mL of wash buffer. Second, a highly viscous consistency can be caused by excess nucleic acids, so ensure that you have added the appropriate amount of DNase. Third, check that the E. coli cells are fully resuspended in solution. If the cells are not completely lysed, the wash buffer solutions can permeabilise the cell membranes, leading to spillage and contamination of the inclusion bodies with nucleic acids and cellular debris. If enzymatic lysis is insufficient, resuspend the cell pellet in lysis buffer, homogenise the pellet, and allow additional time (30 minutes to 1 hour) for enzymatic lysis to occur. If this is still insufficient, consider using a French press or sonication to fully lyse the cells before proceeding to inclusion-body washing.
Problem 2: Inclusion bodies resuspended in 6 M guanidine·HCl turned into a jelly-like consistency after the 14-17 hour incubation (Step 2 - Preparation and extraction of insoluble proteins).
Potential solution: This viscosity is caused by the reformation of disulphide bonds between denatured proteins. Adding a small amount of DTT (20 mM) and allowing the mixture to continue rotating at ~25°C will re-solubilize the protein.
Potential solution: A number of HLAs, including HLA-A*02:01 and the mouse MHC H-2Db, elute with shoulder peaks (before or after the main peak). This could be due to oligomerisation at high protein concentrations, reflecting multiple isoelectric points and therefore elution at different salt concentrations. It is always important to check both peaks by SDS-PAGE to confirm the purity of the refolded pHLA complex. If the gel shows contamination with other proteins, size-exclusion chromatography should be performed to further purify the sample. Remember that if the pHLA sample is used for crystallisation trials, protein purity is critical for crystal formation.
Potential solution: Multiple factors affect the crystallization procedure, including the solubility of the protein in certain buffers, the choice of the right precipitant, the pH of the solution, and the temperature (20°C, 16°C, or 4°C). Instead of setting up trays manually, crystallization conditions can be screened rapidly by the sitting-drop technique in 96-well plates with sealing film, using commercially available sparse-matrix or grid-screen solution kits (Jancarik et al., 1991; Cudney et al., 1994; Tran et al., 2004) (plates and kits available from: Emerald Biostructures Inc.; Hampton Research; Molecular Dimensions; Qiagen-Nextal Biotechnology, Canada; Jena Bioscience, Germany; Axygen Biosciences), in the following order: (i) sparse matrix, (ii) matrix screen PEG/ions, (iii) grid screen ammonium sulfate, (iv) grid screen PEGs, (v) grid screen PEG/LiCl, (vi) grid screen alcohols, (vii) grid screen salts. Shake the crystallization solutions (i.e., sparse-matrix or grid-screen kit solutions) thoroughly before use. Dispense 100 μL of crystallization solution into the deep well and set up drops consisting of 0.5-1 μL of protein solution plus 0.5-1 μL of crystallization solution (equal amounts).
Note: Pipette the crystallisation solution into the protein solution and let the two diffuse together without mixing. Once taken from the refrigerator, the screening solutions should be kept for approximately 30 min at ~25°C to allow equilibration. This procedure facilitates finding the condition in which the protein forms crystals. Starting from this condition, optimization steps may be needed in order to obtain good-quality crystals, such as the use of seeds and/or additives, as well as a range of pH, protein and precipitant concentrations.

Resource Availability

Lead Contact: Further information and requests for resources and reagents should be directed to, and will be fulfilled by, the Lead Contact, Professor Stephanie Gras, s.gras@latrobe.edu.au

Materials Availability: This study did not generate new unique reagents.

Data and Code Availability: This study did not generate/analyze datasets/code.

Figure 2. SDS-PAGE performed for the DEAE-C elution for the HLA-A2 in complex with the SARS-CoV-2 N222-230 peptide. The elution peak was pooled and an 8 μL aliquot was taken to run the SDS-PAGE. As the pHLA protein complex is non-covalently linked, on an SDS-PAGE gel it is shown as two chains: a heavy α chain (~32 kDa) and a β2-microglobulin chain (~10 kDa). The first lane of the gel is the protein molecular weight marker in kDa (MW).

Figure 3. Excess human β2-microglobulin elutes at around 8 mS/cm while the pHLA complex elutes at around 12-18 mS/cm. Protein that has not refolded elutes at 100% buffer B (aggregation peaks). The top bar of the graph shows the numbers of the collected fractions (i.e., A/66-A/68).

Figure 4. SDS-PAGE performed for the HiTrap Q elution for the HLA-A2 in complex with the SARS-CoV-2 N222-230 peptide. The elution peak was pooled and an 8 μL aliquot was taken to run the SDS-PAGE. As pHLA protein complexes are non-covalently linked, on an SDS-PAGE gel the complex is shown as two chains: a heavy α chain (~32 kDa) and a β2-microglobulin chain (~10 kDa). The first lane of the gel is the protein molecular weight marker in kDa (MW).
2021-06-10T13:28:35.463Z
2021-06-09T00:00:00.000
{ "year": 2021, "sha1": "2decc3bafd3a10d2fac7fecf03d4e184102ee322", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.xpro.2021.100635", "oa_status": "GOLD", "pdf_src": "ElsevierCorona", "pdf_hash": "2decc3bafd3a10d2fac7fecf03d4e184102ee322", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221931730
pes2o/s2orc
v3-fos-license
Determination of Doxycycline Hyclate by Batch and Reverse Flow Injection Analysis Based on the Oxidative Coupling Reaction with 3-Methyl-2-benzothiazolinone Hydrazone Hydrochloride (MBTH)

New, simple and sensitive batch and reverse-FIA spectrophotometric methods for the determination of doxycycline hyclate in pure form and in pharmaceutical preparations are proposed. These methods are based on the oxidative coupling reaction between doxycycline hyclate and 3-methyl-2-benzothiazolinone hydrazone hydrochloride (MBTH) in the presence of ammonium ceric sulfate in acidic medium, forming a stable, green, water-soluble dye with maximum absorbance at 626 nm. Beer's law is obeyed over the concentration ranges of 1-80 and 0.5-110 μg mL−1 of DCH for the batch and rFIA methods, respectively, with a detection limit of 0.325 μg mL−1 of DCH for the rFIA method. All the chemical and physical experimental parameters affecting the development and stability of the colored product were carefully studied. The proposed methods were successfully applied to the determination of DCH in pharmaceutical preparations.

Introduction:

Doxycycline hyclate (DCH) is one of the tetracycline derivatives, which have a wide range of antibacterial activities; it has a molecular weight of 512.95 g/mol and the structure shown in Figure 1 [1]. It is a broad-spectrum antibiotic, with activity against a wide range of Gram-positive and Gram-negative bacteria. It has been used for the treatment of infectious diseases caused by rickettsiae, mycoplasmas and chlamydiae [2]. The drug is official in the United States Pharmacopoeia (U.S.P.) [3] and the British Pharmacopoeia (B.P.) [4], which describe the HPLC method for the determination of DCH either as raw material or in pharmaceutical formulations. The literature reveals several methods for the determination of DCH in pharmaceutical dosage forms, including liquid chromatography [5], FIA-spectrophotometry with copper carbonate [6], sequential injection chromatography [7], capillary electrophoresis [8], sodium cobaltinitrite [9], uranyl acetate [10] and 4-aminophenazone/potassium hexacyanoferrate(III) [11]. Chromatographic techniques are the most widely used. Although these procedures are specific, most of the described methods are time-consuming and require multistage extraction procedures [12-14]. This work describes spectrophotometric methods (batch and reverse flow injection) for the determination of DCH in pharmaceutical drugs. The methods were successfully applied to the determination of DCH in three different brands of tablets and capsules with good accuracy and precision, without detectable interference (verified by the standard-addition procedure), and were found to be simple, accurate and easy to apply in routine analysis.

Materials and Methods:

Apparatus

All spectral and absorbance measurements were carried out using a Shimadzu UV-Visible 260 digital double-beam recording spectrophotometer (Tokyo, Japan) with 1 cm quartz cells. A quartz flow cell with 50 μL internal volume and 1 cm path length was used for the FIA absorbance measurements. A two-channel manifold was employed for the FIA spectrophotometric determination of DCH. A peristaltic pump (Ismatec Labortechnik-Analytik, Glattbrugg-Zürich, Switzerland; six channels) was used to transport the reagent solutions. An injection valve (Rheodyne, Altex 210) was employed to provide appropriate injection volumes of standard and sample solutions.
Flexible vinyl tubing of 0.5 mm internal diameter was used for the peristaltic pump. The reaction coil (RC) was Teflon tubing with an internal diameter of 0.5 mm and a length of 150 cm; with an injection loop of 100 μL and a total flow rate of 2.5 mL min−1, the absorbance was measured at 626 nm at 25 °C.

Chemicals:

Doxycycline stock solution (500 μg mL−1): 0.05 g of pure DCH (SDI, Iraq) was dissolved in distilled water and made up to 100 mL in a volumetric flask with the same solvent; dilute solutions were prepared by suitable dilution of the stock standard solution with distilled water.

3-Methyl-2-benzothiazolinone hydrazone hydrochloride (MBTH), 0.2% (w/v): an accurately weighed 0.2 g of MBTH reagent was transferred into a 100 mL calibrated volumetric flask, dissolved in distilled water, and made up to the mark to obtain a 0.2% (w/v) solution.

General batch procedure:

Into a series of 10 mL volumetric flasks, increasing volumes of doxycycline working solution (500 μg mL−1) were transferred to cover the range of the calibration graph (1-80 μg mL−1), and then 0.8 mL of MBTH (1%) and 1 mL of (NH4)4Ce(SO4)4·2H2O (1%) dissolved in 0.1 M H2SO4 were added. The solutions were diluted to the mark with distilled water to develop the color, mixed well and left for 15 min at room temperature (25 °C). The absorbance was measured at 626 nm versus reagent blanks prepared in the same way but containing no doxycycline. A calibration graph was constructed and the regression equation was calculated.

General FIA procedure:

Working solutions of DCH in the range 0.5-110 μg mL−1 were prepared from the stock solution. A 100 μL portion of the MBTH reagent (0.1%) was injected into the stream of 40 μg mL−1 DCH and then combined with a stream of oxidant (Am.C.S., 0.25% in 0.01 M H2SO4) at a total flow rate of 2.5 mL min−1 with a 150 cm reaction coil. The resulting absorbance of the colored dye was measured at the maximum wavelength. Calibration graphs were prepared over the ranges cited in Table 1.

Sample preparation:

A stock solution (500 μg mL−1) was prepared daily by dissolving 0.05 g of pure DCH in 100 mL of distilled water, and serial dilutions were made with distilled water.

Results and Discussion:

The factors affecting the sensitivity and stability of the colored product resulting from the oxidative-coupling reaction of MBTH with DCH in acidic medium were carefully studied. A typical spectrum of the dye formed from 40 μg mL−1 DCH, measured versus the reagent blank, shows that the blank has negligible absorbance at 626 nm (Figure 2).

Figure 2: Absorption spectra of DCH (40 μg mL−1) treated as described under the procedure and measured against the reagent blank (MBTH and Am.C.S.), and of the reagent blank measured against distilled water.

Batch spectrophotometric determinations

After fixing the optimum reaction conditions, a calibration graph for DCH was prepared according to the following procedure: into 10 mL standard flasks, increasing volumes of DCH working solution (500 μg mL−1) were transferred to cover the range of the calibration graph (1-80 μg mL−1). A volume of 0.8 mL of MBTH (1%) and 1 mL of (NH4)4Ce(SO4)4·2H2O (1%) dissolved in 0.1 M H2SO4 were added. The contents of the flasks were diluted to the mark with distilled water, mixed well and left for 15 min at room temperature (25 °C). The absorbance was measured at 626 nm against a reagent blank containing all materials except DCH.
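The plotting and regression described next are routinely scripted; the sketch below (Python) shows the least-squares fit and a detection-limit estimate. The absorbance readings are invented purely for illustration, and the LOD uses the common 3.3·sigma/slope convention, which may not be the exact formula the authors applied:

    import numpy as np

    # hypothetical calibration data: concentration (ug/mL) vs absorbance at 626 nm
    conc = np.array([1, 5, 10, 20, 40, 60, 80], dtype=float)
    absb = np.array([0.012, 0.055, 0.108, 0.213, 0.422, 0.630, 0.841])

    slope, intercept = np.polyfit(conc, absb, 1)   # regression equation A = slope*C + intercept
    resid = absb - (slope * conc + intercept)
    sigma = resid.std(ddof=2)                      # residual standard deviation (n - 2 dof)
    lod = 3.3 * sigma / slope                      # detection limit, 3.3*sigma/slope
    r = np.corrcoef(conc, absb)[0, 1]              # correlation coefficient

    print(f"A = {slope:.4f}*C + {intercept:.4f}, r = {r:.4f}, LOD = {lod:.3f} ug/mL")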
The calibration graph was then prepared by plotting the absorbance versus the concentration of DCH, and the regression equation was calculated. The analytical values and statistical treatments for the calibration graph are summarized in Table 1.

The stoichiometry of the formed product was studied by the continuous-variation (Job's) method and the mole-ratio method. Job's method was applied by adding decreasing volumes (5 to 0.0 mL) of 3.365 × 10−4 M MBTH solution into a series of 10 mL volumetric flasks, followed by increasing volumes (0.0 to 5 mL) of DCH of the same concentration and 1 mL of 1% (NH4)4Ce(SO4)4·2H2O dissolved in 0.1% H2SO4. The solutions were diluted to the mark with distilled water, allowed to stand for 15 min, and the absorbance was measured versus the reagent blank. In the mole-ratio method, increasing volumes (0.25-2.5 mL) of 3.365 × 10−4 M MBTH solution were added to a series of 10 mL volumetric flasks, followed by 1 mL of DCH of the same concentration and 1 mL of 1% (NH4)4Ce(SO4)4·2H2O dissolved in 0.1% H2SO4. The volumes were made up to the mark with distilled water and allowed to stand for 15 min. The absorbance was measured at 626 nm versus the reagent blank. The results of both methods were plotted (Figures 3 and 4) and indicated a 1:1 (MBTH:DCH) ratio, with the reaction proceeding according to Scheme 1.

FIA determination:

The batch method for the determination of DCH was adopted as a basis for developing the rFIA procedure. The manifold used for the determination of DCH was designed to provide reaction conditions that magnify the absorbance signal generated by the reaction of DCH with MBTH in the presence of Am.C.S. dissolved in sulfuric acid medium. Maximum absorbance intensity was obtained when the reagent was injected into a stream of the DCH drug and combined with a stream of oxidant (Am.C.S., 0.25% in 0.01 M H2SO4). The influence of the different chemical and physical FIA parameters on the absorbance intensity of the colored product was optimized as follows.

Chemical variables

The effect of the concentrations of MBTH and ammonium ceric sulfate ((NH4)4Ce(SO4)4)

The effect of total flow rate

The effect of total flow rate was investigated in the range of 0.5 to 4.5 mL min−1 using equal flow rates in the two channels of the FIA manifold. The results showed that a total flow rate of 2.5 mL min−1 gave the highest absorbance (Figure 7) and was used in all subsequent experiments. Above this rate, the absorbance of the colored product decreased with increasing total flow rate because the residence time is not sufficient for the reaction to be completed. A total flow rate of 2.5 mL min−1 was selected as a compromise between sample throughput and sensitivity.

The effect of reaction coil length

The effect of the reaction coil length was investigated over the range 25-250 cm; at a length of 150 cm the absorbance reached its maximum for rFIA (Figure 8), and it then decreased as the reaction coil was lengthened because of the increase in dispersion. Therefore, the optimal length for the rFIA procedure was 150 cm, which was used in all subsequent experiments.

The effect of injected sample volume

The volume of the injected sample was varied between 100 and 250 μL using different lengths of sample loop. The results showed that an injected volume of 100 μL gave the best absorbance for the rFIA method (Figure 9).
The volume of 100 μL was chosen as optimal for subsequent experiments.

Accuracy and precision

Under the optimum conditions, the accuracy and precision of the determination of DCH by the reverse-FIA method were studied using three different concentrations of standard DCH. Table 2 shows the E%, Rec.% and RSD% of five determinations at each of three concentrations (10, 40 and 80 μg mL−1); satisfactory results were obtained.

Pharmaceutical applications

The proposed method was applied to the determination of DCH in capsules by analysing three concentrations of each sample using the recommended procedure. The results are summarized in Table 3 and can be considered satisfactory.

Analytical application

In order to evaluate the competence of the proposed methods, the results obtained were compared with those obtained by the standard method. The results of the two methods were statistically compared using the Student t-test and the variance-ratio F-test at the 95% confidence level. In all cases, the calculated t- and F-values (Table 4) did not exceed the theoretical values, indicating that there is no significant difference in accuracy or precision between the two methods for the determination of DCH in pharmaceutical preparations.

Conclusion:

The application of the oxidative-coupling reaction of MBTH with Am.C.S. to the spectrophotometric determination of doxycycline hyclate in pharmaceutical preparations was described for batch and rFIA systems. The rFIA system has several advantages over the batch system: simplicity, reproducibility, time saving, low reagent consumption, the need for only a small sample volume, a large dynamic range and a high sample throughput (72 samples h−1 for DCH). The proposed methods offer good linearity and precision, and are simple and inexpensive since they require only simple instrumentation.
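The t- and F-test comparison reported in Table 4 can be reproduced with standard statistical tools; a minimal sketch (Python with SciPy), where the recovery values are invented for illustration only:

    import numpy as np
    from scipy import stats

    # hypothetical recoveries (%) by the proposed and the standard method
    proposed = np.array([99.2, 100.4, 98.7, 100.9, 99.6])
    standard = np.array([99.8, 100.1, 99.1, 100.6, 99.9])

    t_stat, t_p = stats.ttest_ind(proposed, standard)       # Student t-test on the means
    f_stat = proposed.var(ddof=1) / standard.var(ddof=1)    # variance-ratio F statistic
    f_p = 1 - stats.f.cdf(f_stat, len(proposed) - 1, len(standard) - 1)

    # no significant difference at the 95% level if the p-values exceed 0.05
    print(f"t = {t_stat:.3f} (p = {t_p:.3f}), F = {f_stat:.3f} (p = {f_p:.3f})")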
2019-08-18T05:59:03.115Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "48a8c2a8acb6d49a00c46b2cd07b3b71b6179367", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21123/bsj.2016.13.2.2ncc.0489", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "de57349810deb9c81c9bf531c412c0de3033b12d", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
246390554
pes2o/s2orc
v3-fos-license
In-vitro antibacterial activity of Pterolobium stellatum leaves extract against selected standard bacteria

Traditional use of herbal medicines implies substantial historical use, and this is certainly true for many products that are available as 'traditional herbal medicines'. This experimental study on the antibacterial effect of the leaf extract of Pterolobium stellatum was conducted between February and May 2016 at the University of Gondar. The purpose of the present study was to test the antimicrobial effect of P. stellatum leaf extracts against some standard pathogenic bacteria. The collected plant leaf sample was extracted with the solvents ethanol, methanol, chloroform and distilled water. The antibacterial effect of the extracts was then tested against several bacterial species (Escherichia coli, Pseudomonas species, Salmonella species, Shigella species, Staphylococcus aureus and Streptococcus pyogenes), and the inhibition zones, the minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) were determined. The ethanol and methanol extracts showed high antibacterial activity against both Gram-negative and Gram-positive bacteria. The highest, statistically significant (P<0.05) inhibition was seen with the ethanol extract for all bacteria, greatest against Shigella spp. (21.33±1.52), whilst the lowest, also statistically significant (P<0.05), inhibition was seen with the chloroform extract. Both the MIC and MBC of the test extracts were effective at the lowest concentrations.

INTRODUCTION

Illnesses borne by pathogenic microorganisms are a continuous threat to public health (Yashphe et al., 2006; Araujo et al., 2015). Bacterial species have the genetic ability to acquire and transmit resistance against currently available antibacterials, and there are frequent reports of the isolation of bacteria that were known to be sensitive to routinely used drugs but have become multi-resistant to other medications available on the market (Chandra, 2013; Kouadio et al., 2020). Antimicrobial resistance is the withstanding by a microorganism of an antimicrobial drug that was originally effective for the treatment of infections caused by it (Adesokan et al., 2007; Yhiler et al., 2019). Interest in the use of natural plant-derived products rather than chemicals as antimicrobials is increasing significantly (Suppakul et al., 2003; Santos et al., 2016). Further, various bacteria have developed resistance to certain antibiotics, and thus other forms of bactericidal agents are required (Oussalah et al., 2006; Kadaikunnan et al., 2015). Since time immemorial, medicinal plants, whole or in part, have been used to treat all types of diseases (Jamuna et al., 2011; Marathe et al., 2013). Medicinal plants have long been used as natural resources for the treatment of various diseases (Mitiku et al., 2014; Elizabeth et al., 2015) and have been the main source for new drug development (Kumara et al., 2009; Ayandele et al., 2018). The record of plant usage for treatment in its many forms provides a major focus in global health care, as well as contributing substantially to the drug development process (Maikai et al., 2009; Mitiku et al., 2014).
Medicinal plants contain physiologically active principles that over the years have been exploited in traditional medicine for the treatment of various ailments, as they possess antimicrobial properties (Miyakis et al., 2011; Araujo et al., 2015). It is believed that plants rich in a wide variety of secondary metabolites belonging to chemical classes such as tannins, terpenoids, alkaloids and polyphenols are generally superior in their antimicrobial activities (Pandian et al., 2006; Hemalatha and Dhasarathan, 2010; Marathe et al., 2013). Therefore, the strength of the biological activities of a natural product depends on the diversity and quantity of its antimicrobial constituents (Cos et al., 2006; Liu, 2006; Elizabeth et al., 2015). Furthermore, natural products, either as pure compounds or as standardized plant extracts, provide unlimited opportunities for new drug leads because of their unmatched chemical diversity (Chakraborty, 2009; Meenakumari et al., 2011; Idris and Abubakar, 2016). This has urged microbiologists all over the world to formulate new antimicrobial agents and to evaluate the efficacy of natural plant products as substitutes for chemical antimicrobial agents (Alikhan et al., 2012; Alfatah et al., 2013; Mimura et al., 2020).

Pterolobium stellatum is a tall, scrambling or climbing shrub with woody, rope-like stems. Young plants are densely covered with hairs on stem and leaves. The stem has hooked prickles in pairs at the nodes and scattered ones between the nodes. Leaves are compound with 7 to 15 pairs of leaflets. The leaf axis is armed on the lower side with paired reflexed prickles. The flowers are small and sweetly scented, with a pale yellowish-white color. The seed pods are broadly winged, red to scarlet when young and becoming brown with age (Afolayan and Aliero, 2006). Considering the vast potential of plants as sources of antimicrobial drugs, with particular reference to antibacterial agents, a systematic investigation was undertaken to screen the local flora for antimicrobial activity. Thus, this study investigated the in-vitro antibacterial activity of P. stellatum leaves against selected bacterial species.

Study site description

Gondar, a historical town of Ethiopia, is located in the north of the country, about 747 km from Addis Ababa, the capital. Geographically, Gondar lies at 12°35'07" North latitude and 37°26'08" East longitude, and its altitude varies between 2000 and 2200 m above sea level. Gondar has a humid subtropical mild-summer climate, with dry winters, mild rainy summers and moderate seasonality. This climate is usually found in the highlands of some tropical countries. According to the Holdridge life-zones system of bioclimatic classification, Gondar is situated in or near the subtropical dry forest biome. The annual average temperature is 19.1°C, and average monthly temperatures vary by 4°C (7.2°F) (Gondar Agriculture and Rural Development Office).

Plant leaves collection and identification

The medicinal plant P. stellatum was selected for this study on the basis of data obtained from local people and the literature, because this plant is traditionally used for wound treatment. Fresh and healthy leaves of P. stellatum were collected from the University of Gondar Atse Tewodros and Maraki campus gardens in the winter season. The voucher specimens were identified by Mr. Abiyu Eniyew at the University of Gondar herbarium of the botany laboratory, and the voucher specimen PLS No. 0113/17 was deposited.
Figure 2 shows the plant collected from the University of Gondar garden.

Preparation of crude extract of plant materials

All the necessary chemicals, media and equipment for this study were obtained from the Microbiology Laboratory of the Department of Biology, University of Gondar. The leaves of the plant were washed with running tap water and finally with sterile distilled water. They were then dried in the open air, protected from direct exposure to sunlight, to prevent degradation of the active ingredients (Girish and Satish, 2008). The plant material was ground using a grinding machine (Kika-werke GmbH, Germany) and passed through a mesh sieve to obtain a fine powder. From the sieved powder, 50 g was mixed with 500 mL (i.e., a 1:10 ratio) of extracting solvent (chloroform, ethanol, methanol, or water) and shaken gently for 24 h on a shaker. The extract was filtered using filter paper (Whatman No. 1) and the solvents were evaporated on a rotary evaporator under reduced pressure at 78, 61, 65 and 93°C, respectively. The extracts were dried at room temperature (Farrukh et al., 2010).

Standard antibiotics

Gentamicin, obtained from the University of Gondar teaching hospital pharmacy, was used as the positive control, and distilled water, chloroform, ethanol and methanol solvents were used as negative controls for the antibacterial susceptibility test. Since all of the negative controls gave a 0.00 inhibition zone, they are presented together as the negative control.

Test bacterial strains

Standard Staphylococcus aureus, Escherichia coli, Shigella species, Pseudomonas species, Salmonella species and Streptococcus pyogenes were obtained from the University of Gondar teaching referral hospital. These bacterial strains come from the institution's isolated and laboratory-identified culture collection used for tests and research. Since all test strains were clinically isolated and identified, they carry no identifying codes (i.e., they are not ATCC strains).

Preparation of McFarland and turbidity standard for inoculation

Standardization of the density of the isolated inoculums for the susceptibility test was done by the method described in Erturk (2006). In order to determine the exponential phase of the test organisms, each isolate was grown in 5 mL of Muller-Hinton broth (MHB) in a separate test tube for 24 h in an incubator. Samples from the exponential phase were taken and the inoculum density was adjusted to a 0.5 McFarland turbidity standard, prepared by adding 0.05 mL of BaCl2 solution to 9.95 mL of H2SO4 (Erturk, 2006). The turbidity of the inoculums was adjusted accordingly.

Preparation of culture media

Muller-Hinton agar (MHA) medium was used for the sensitivity tests. The medium was prepared and treated according to the manufacturer's guidelines: 38 g of MHA was mixed with 1 L of distilled water, dissolved on a hot plate, and autoclaved at 15 psi and 121°C for 15 min. The medium was later dispensed into 70 mm sterile agar plates and left to set. The agar plates were incubated for 24 h at 37°C to confirm their sterility; plates showing no growth after 24 h were considered sterile.

Agar well diffusion

The bacterial strains were tested on MHA plates by making 6 mm wells in the media using a sterile cork-borer. Inoculums from the exponential growth phase of each bacterial isolate were mixed using a vortex. The turbidity of the reconstituted organisms was adjusted to the 0.5 McFarland standard. Both the standard and the bacterial suspensions were agitated on a vortex mixer immediately prior to use.
From these suspensions, a volume of 100 µL of bacteria was inoculated using a micropipette. After inoculating the bacterial isolates, the plates were allowed to dry for 5 min, after which the crude extracts and the controls were dispensed into each well. The plates were incubated at 37°C for 24 h. The inhibition zone sizes were measured in millimeters and compared to the standard gentamicin (Theuretzbacher, 2011).

Minimum inhibitory concentration (MIC)

The MIC of the extracts was determined by the MHB dilution technique (Zied et al., 2011). First, the leaf crude extracts were prepared at different concentrations (6.25, 12.5, 25 and 50 mg/100 mL). The broth-containing test tubes were tightly closed, arranged in a test-tube rack and autoclaved at 15 psi and 121°C for 15 min. The broths were allowed to cool to room temperature. The extracts at the different concentrations (100 µL) and the test bacteria (approximately 1 × 10^8 CFU/mL) were aseptically introduced. Inhibition of growth was assessed after 24 h of incubation at 37°C. The presence of growth was evaluated by comparison with the negative control, the positive control and the culture-containing test tubes. The lowest concentration of extract that showed antimicrobial activity against the test organisms was recorded as the MIC value (Igoji et al., 2005); a tabulation sketch of this readout is given below.

Minimum bactericidal concentration (MBC)

Broth-containing test tubes that did not show any bacterial growth at the MIC were used to determine the MBC. Small volumes of these broths were streaked onto the surface of MHA medium with a sterile wire loop. The medium was incubated at 37°C for 24 h. The lowest concentration of plant extract that effectively inhibited bacterial growth on the agar plate was recorded as the MBC of the extract (Igoji et al., 2005).

Data analysis

Data were collected experimentally using basic laboratory techniques. Susceptibility data were analyzed using the SPSS software package, version 20.00; Microsoft Excel was employed for the analysis of the MIC and MBC.

Antibacterial sensitivity test

The antimicrobial activity of the P. stellatum leaf extracts was evaluated based on the diameter of the clear inhibition zone in millimeters; where there was no inhibition zone, it was assumed that there was no antimicrobial activity (Table 1).

Minimum inhibitory concentration (MIC in mg/µl)

The MIC of all crude extracts against the test organisms was determined using the broth dilution method. All the test organisms were inhibited by all extract solvents within the range of 6.25 to 50%, and the obtained MIC values ranged from 6.25 to 12.5%. The chloroform extract showed an MIC of 6.25 against E. coli, Pseudomonas spp. and Salmonella spp., and 12.5 for the rest of the tested organisms. The results for chloroform and the other solvents are shown in Table 2.

Minimum bactericidal concentration (MBC in mg/µl)

The MBC values were determined for all test organisms from the MIC results. The range obtained was 6.25 to 12.5%. The MBC for S. aureus was 12.5 for all extraction solvents, while the other tested organisms gave different values (Table 3).

DISCUSSION

Bacterial resistance against multiple antibiotics is of great concern to both veterinary and human medicine worldwide and has been posing serious problems in the treatment of infectious diseases (Madubuike et al., 2018). Antibiotics that are widely used for the treatment of infectious diseases are under constant threat from the emergence of antibiotic-resistant pathogens (Marathe et al., 2013; El-Banna and Qaddoumi, 2016).
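As a side note to the broth-dilution MIC readout described in the Methods above, the bookkeeping reduces to finding the lowest concentration with no visible growth; a minimal Python sketch in which the concentration series is the paper's but the tube readings are invented for illustration:

    # minimal MIC readout: lowest concentration in the series that shows no growth
    def mic(series_mg, growth_flags):
        """series_mg: tested concentrations; growth_flags: True where growth was seen."""
        for conc, grew in sorted(zip(series_mg, growth_flags)):
            if not grew:
                return conc        # first (lowest) concentration that inhibited growth
        return None                # no inhibition within the tested range

    series = [6.25, 12.5, 25.0, 50.0]        # mg/100 mL, as in the Methods
    growth = [True, False, False, False]     # hypothetical readings for one strain
    print(mic(series, growth))               # -> 12.5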
Many studies have been conducted and information is available on P. stellatum. In East Africa, fresh leaves of P. stellatum are chewed, or a decoction is drunk, to treat tuberculosis and related respiratory diseases (Pandian et al., 2006), and they are also used for weight loss or athletic performance enhancement (Girish and Satish, 2008). In Kenya, a root decoction is used by the Maasai against stomach-ache, and juice of the roots is swallowed to treat snake-bites. In Malawi, a root infusion is drunk by women against infertility (Hassan et al., 2007). Tea from the leaves is used to treat fever, and packets of leaves are burned under the bed of colic sufferers (Salatino et al., 2007).

Agar diffusion methods are the most highly recommended methods for antibacterial testing (El-Banna and Qaddoumi, 2016). In this study, the antimicrobial activity of P. stellatum was assessed, recorded and analyzed on the basis of the inhibition zone. The study clearly indicated that the chloroform, distilled water, ethanol and methanol extracts of P. stellatum were able to inhibit the tested bacteria, in line with a previous report by Dhiman et al. (2011). The plant extracts show antibacterial activity against the test organisms because of the plant's active ingredients that inhibit bacterial growth (Meenakumari et al., 2011). However, the degree of inhibition differed, which may be because of differences in bacterial strain and in the kind of solvent used (Zied et al., 2011; Alikhan et al., 2012). The variation in the effectiveness of the extract concentrations against the isolates under study may be attributed to their phytochemical composition, coupled with differences in the membrane permeability of the bacterial organisms to the chemicals and in their possible metabolism (Madubuike et al., 2018). The difference in antibacterial activity between extracts may be attributed to the fact that different compounds from the plant material are extracted by solvents of different polarities (Marathe et al., 2013; Elizabeth et al., 2015).

In our study, the ethanol extracts of this plant showed the highest inhibition zones relative to the positive control (gentamicin), while the negative controls had no antimicrobial activity. This is in accordance with the report of Calderon et al. (2012) that such solvents are widely used to obtain crude phytochemical extracts from plant materials in the herbal medicine industry for therapeutic applications. The chloroform extract showed the lowest inhibition zone against all bacteria except Salmonella spp., for which the lowest inhibition zone was given by distilled water. All the plant extracts showed minimum inhibition zones at a concentration of 5 mg/mL against all bacteria in the agar well diffusion method, and the highest at 50 mg/mL, in accordance with previous results by Biruhalem et al. (2011). The antimicrobial activity of a crude extract may be attributed to a specific compound or to a combination of compounds (Marathe et al., 2013; Elizabeth et al., 2015); these bioactives can be alkaloids, flavonoids, coumarins, saponins and steroid compounds of plant origin known to have antibacterial activity (Marathe et al., 2013). The obtained data showed that all the solvent extracts were active against all the test organisms: Staphylococcus aureus, Shigella spp., Salmonella spp., Pseudomonas spp., S. pyogenes and E. coli.
The ethanol extract showed a larger inhibition zone against all test organisms (Gram-positive and Gram-negative bacteria) than the other extracts. This is because the antibiotically active compounds of the plant leaf extracted by ethanol are highly effective against all the tested organisms (Dhiman et al., 2011). Phytochemical analysis associated with the antimicrobial activity of the crude extract revealed the presence of active compounds including flavonoids, steroids, triterpenes, tannins, saponins and alkaloids (Madubuike et al., 2018). Most Gram-positive bacteria are more sensitive than Gram-negative ones (Selvamohan et al., 2012; Michael et al., 2013). It is an established fact that Gram-positive and Gram-negative bacteria react differently to antibacterial agents owing to the differences in their cell wall component (peptidoglycan) and in the ability of these agents to penetrate them (Idris and Abubakar, 2016; Elizabeth et al., 2015; Ko and Stone, 2020). Another possible explanation is that some bioactive compounds in the extract might be responsible for the extracts' higher effect against Gram-positive than Gram-negative bacteria (Lima et al., 2006, as cited in Santos et al., 2016; Ko and Stone, 2020). In our results, however, both were sensitive; this may be because the plant extract is active against both Gram-negative and Gram-positive bacteria.

There was a difference compared with the positive control at the 5 mg concentration of the chloroform extract. The E. coli inhibition zone showed a significant difference compared with the other bacteria at the 25 mg concentration of the chloroform extract, and E. coli and S. pyogenes at the 50 mg concentration differed from the others in the chloroform extract. Ethanol at the 5 mg concentration showed a significant dose difference within the group, and E. coli differed from the positive control in all the distilled water extracts.

Determining the concentration required to inhibit and to kill the organism, i.e., the MIC and the MBC respectively, is crucial in antibacterial experiments (Idris and Abubakar, 2016). In this study, the MIC results ranged from 6.25 to 12.5 mg/µl. Methanol and chloroform at a concentration of 6.25 mg/µl inhibited E. coli, and the other strains were inhibited at 12.5 mg/µl. Methanol, ethanol and chloroform inhibited S. aureus at 12.5 mg/µl, whereas distilled water did so at 6.25 mg/µl. Shigella was inhibited at a concentration of 6.25 mg/µl by all extracts except chloroform (12.5 mg/µl). The MBC results of all extracts against the tested organisms showed a similar range to the MIC. S. aureus fell in the 12.5 range for all solvent extracts but the other organisms differed, and Pseudomonas spp. showed a significant extract difference at 25 mg for all solvent extracts except distilled water. These results agreed with those stated by Surjeet et al. (2011).

Active components in plants may provide potential sources of new drugs for the safe and effective treatment of microbial diseases (de Oliveira Santos et al., 2016). Our investigation clearly indicates that the leaves of P. stellatum contain antimicrobial components of great potential, which can play a substantial role in the pharmaceutical industry and in healing various diseases. The high potency observed in our study is therefore a call for further analysis of this plant using higher-resolution molecular techniques to ascertain its safety in the management of human and animal diseases (Madubuike et al., 2018).

Conclusion

The present work demonstrated that P.
stellatum leaf extracts have antimicrobial potential against all the tested bacteria (S. aureus, Shigella spp., Salmonella spp., Pseudomonas spp., S. pyogenes and E. coli) with the various solvents (methanol, ethanol, chloroform and distilled water) across the 5, 25 and 50 mg doses. The ethanol and methanol extracts showed higher antimicrobial activity than chloroform and distilled water; the ethanol extract had the highest antibacterial effect and chloroform the least. Both the MIC and MBC of the test extracts were effective at the lowest concentrations. Further studies, such as isolating and analyzing the specific antibacterial principle, testing the effectiveness of other parts of the plant, assessing toxicity, and isolating the bioactive compounds, are needed to better evaluate the antibacterial potential of the plant.
2022-01-30T16:13:33.471Z
2022-01-31T00:00:00.000
{ "year": 2022, "sha1": "02ed7f6b3d7a5cff20eb887ebaeff43677a0c770", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/JMA/article-full-text-pdf/F2670D968345.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "37616116502a1d8c6d54fa67723b51dc89cf67cf", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
263618956
pes2o/s2orc
v3-fos-license
Design and Application of Memristive Balanced Ternary Univariate Logic Circuit

This paper proposes a unique memristor-based design scheme for a balanced ternary digital logic circuit. First, a design method for single-variable logic function circuits is proposed. Then, by combining these with a balanced ternary multiplexer, some common application-type combinational logic circuits are proposed, including a balanced ternary half adder, multiplier and numerical comparator. All the above circuits are simulated and verified in LTSpice, which demonstrates the feasibility of the proposed scheme.

Introduction

In the era of big data, the amount of data is growing explosively, and as a result digital logic systems have difficulty processing such huge amounts of data while striving for ever-increasing efficiency [1]. To meet the demands of data-processing speed and power efficiency, ternary logic has received recent attention due to its advantages of higher single-line information-carrying capacity and additional logical functions [2-7]. Compared to a binary digital signal, each bit of a ternary digital signal carries more information, resulting in a higher transmission rate at the same frequency. It also helps in reducing circuit interconnections, so digital chips can be made smaller and less expensive [8-10]. Ternary logic can be divided into two categories: balanced ternary {−1, 0, 1} and unbalanced ternary {0, 1, 2} or {0, −1, −2} [11]. Among them, balanced ternary logic has unique advantages, including a unified representation of positive and negative numbers without a sign bit, and a multiplication operation that does not generate a carry. Moreover, the symmetry of the one-bit addition and multiplication operations can be exploited in the design of symmetric arithmetic operation circuits [12,13].

There are two typical paradigms for designing memristor-based ternary logic circuits; one uses three resistance states of a ternary memristor and the other uses the voltage value as the logic variable. The former method makes full use of the resistance-switching characteristics of the memristor: the operation result can be stored in the memristors, so the logic state is not lost after power is withdrawn. Several studies [21,22] reported unbalanced ternary basic logic gate circuits using the three resistance states of a ternary memristor, corresponding to positive ternary logic '0', '1' and '2'. A voltage-controlled tri-valued memristor model was first proposed in Ref. [22], with designs of ternary AND, OR and NOT gate circuits based on it; in this case, three stable resistance states, RH, RM and RL, correspond to logic '0', '1' and '2', respectively. In Ref. [23], a bipolar three-state ZnO memristor was reported, and all 27 possible univariate positive ternary logics were then realized with a single memristor cell. Furthermore, Ref. [24] proposed a method of realizing a balanced ternary adder using the resistance-state transformation of only a single memristor, greatly reducing the circuit area and system power consumption.

Significant advancement has also been achieved in implementing logic circuits using the second method (i.e., employing the voltage value as the logic variable) [25,26]. For example, Wang et al. [27] reported the construction of positive ternary logic circuits, including the ternary AND gate, OR gate, inverters, and encoder and decoder circuits. Similarly, in Ref.
[28], ternary basic logic gates and combinational logic circuits using a memristor-CNTFET hybrid circuit were proposed, whose delay and circuit complexity were lower than those of circuits using only CNTFETs. Ref. [29] proposed a systematic method of constructing two-digit ternary logic functions based on the concept of memristive threshold logic (MTL) and applied this method to the construction of basic ternary arithmetic operations; compared with previously reported circuit design schemes, the circuit areas of the ternary adder and ternary multiplier were greatly reduced. In Refs. [5,30], balanced ternary logic circuits based on memristors and MOSFETs were proposed. The design idea was to construct the essential balanced ternary logic gates, such as TAND, TOR, TI, TSUM, NCONS, NANY, etc., and then to propose a design scheme for a balanced ternary full adder.

As a further development, in the present study combinational logic circuits are implemented directly by combining univariate logic circuits and multiplexers. The multiplexer uses the circuit proposed in Ref. [31]; its function is to select the data of exactly one of multiple channels and transmit it to the output terminal according to the state of the selection signal. The proposed design scheme for a memristive balanced ternary digital logic circuit with the voltage value as the logic variable could be beneficial for further improving information storage, processing and transmission efficiency.

The structure of this paper is as follows: Section 2 presents a design scheme for balanced ternary single-variable logic function circuits based on a hybrid design of memristors and MOS transistors; in Section 3, based on the proposed univariate logic circuits and the multiplexer designed in our previous study [31], balanced ternary application-type combinational logic circuits are designed, including a half adder, a multiplier and a numerical comparator; Section 4 presents a comparison and analysis of the proposed circuits with existing designs; Section 5 contains the conclusion of this paper.

Balanced Ternary Univariate Logic Circuit

In digital logic circuits, univariate logic functions are used to perform the corresponding logic transformations on signals, and thus play an important role in circuit design. For ternary logic, there are three possible values for a single input variable, with 3³ = 27 possible output functions in total, as shown in Table 1. As evident from Table 1, balanced ternary univariate logic can be divided into three categories: three-state to one-state logic, three-state to two-state logic, and three-state to three-state logic. The first category (three-state to one-state logic: F1, F14 and F27) is also called constant logic; that is, irrespective of the input value, the output is a fixed logic state, so its applications in circuit design are limited.

This paper mainly covers the circuit design of the other two categories, i.e., the balanced ternary three-state to two-state logic and the three-state to three-state univariate logic, together with a detailed analysis and simulation verification of the corresponding circuits. All univariate logic circuits are represented by the circuit symbol shown in Figure 1.

Figure 1. Circuit symbol of the univariate logic function Fn.
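To make the count concrete, the 27 univariate maps over {−1, 0, 1} and the sizes of the three categories can be enumerated in a few lines of Python (a sketch; the positional numbering below is not claimed to match the paper's F1-F27 ordering):

    from itertools import product

    TRITS = (-1, 0, 1)

    # a univariate ternary function is one output choice per input: (f(-1), f(0), f(1))
    functions = list(product(TRITS, repeat=3))          # 3**3 = 27 functions

    by_category = {1: [], 2: [], 3: []}                 # keyed by number of distinct outputs
    for outs in functions:
        by_category[len(set(outs))].append(outs)

    print(len(functions))                               # 27
    print({k: len(v) for k, v in by_category.items()})  # {1: 3, 2: 18, 3: 6}

The counts match the classification in the text: 3 constant functions, 18 three-state to two-state functions, and 6 three-state to three-state functions.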
Three-State to Two-State Logic

From the truth table of the balanced ternary univariate logic functions shown in Table 1, there are 18 kinds of univariate logic functions with three-state to two-state logic. Among them, the logics F19 and F25 correspond to the NTI gate and the PTI gate, respectively, which have been introduced in detail in Ref. [31] and will not be repeated in this section.

Circuit Design of Logic Functions F4, F5, F9, F10, F13, F18, F23 and F26

Table 2 shows the designed circuit diagrams and the threshold voltage ranges of the MOS transistors. While the circuits of logic functions F4 and F9 need only one memristor and one NMOS transistor, those of logic functions F5, F10, F13, F18, F23 and F26 are all composed of two memristors and one NMOS transistor. Among them, the two groups of logics (F10 and F13) and (F23 and F26) adopt the same circuit structure, but the threshold voltage ranges of the MOS transistors in the corresponding circuits differ. See Table 2 for details.

The working principles of these logic functions can be understood by analyzing the circuits of logic functions F4 and F5. For F4, when input A is −VDD (logic '−1') or 0 V (logic '0'), transistor T1 is turned off and the output terminal is connected directly to the input terminal through memristor M1, so the output remains consistent with the input. When input A is VDD (logic '1'), transistor T1 is turned on and the output terminal is connected directly to −VDD through T1; that is, logic '−1' is output. For F5, when input A is −VDD (logic '−1') or 0 V (logic '0'), transistor T1 is turned off, the output terminal is connected to the input terminal through memristor M1, and the output is consistent with the input. When input A is VDD (logic '1'), transistor T1 is turned on and there is a current path flowing from the input terminal to −VDD in the circuit; both memristors M1 and M2 switch to the ROFF state, and after voltage division the output terminal sits at about 0 V, that is, logic '0' is output. Similar analysis can be used to verify the correctness of the other circuits and will not be repeated here.

The Circuit Design of the Remaining Three-State to Two-State Logic Functions

The remaining three-state to two-state logic function circuits, namely the F2, F3, F7, F11, F15, F17, F21 and F24 logics, can be obtained by cascading the circuits mentioned above. For example, for the F2 logic circuit it is only necessary to cascade an F4 logic circuit after the F26 logic circuit to complete the logic conversion corresponding to F2. Table 3 shows the design scheme of the single-variable three-state to two-state logic circuits obtained via this cascade method, where 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit.
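The 'Fm + Fn' cascade rule is just composition of the truth tables; a quick Python check using the two mappings derived in the circuit analysis above (which composite each cascade realizes in the paper's Table 3 is not reproduced here):

    # truth tables taken from the circuit analysis in the text
    F4 = {-1: -1, 0: 0, 1: -1}   # follows the input for -1/0, outputs -1 for input 1
    F5 = {-1: -1, 0: 0, 1: 0}    # follows the input for -1/0, outputs 0 for input 1

    def cascade(fm, fn):
        """'Fm + Fn': feed Fm's output into Fn."""
        return {a: fn[fm[a]] for a in (-1, 0, 1)}

    print(cascade(F4, F5))       # {-1: -1, 0: 0, 1: -1}, i.e. equal to F4 itself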
F4 F5 F9 Circuit Structure The working principles of these logic functions can be understood via simply analyzing the circuits of logic functions F4 and F5.For F4, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, and the output terminal will be directly connected to the input terminal through memristor M1, so the output remains consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and the output terminal will be directly connected to -VDD through T1, that is, logic '-1' is the output.For F5, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, the output terminal will pass through memristor M1, which is directly connected to the input terminal, and the output is consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and there is a current path flowing from the input terminal to −VDD in the circuit.Both memristors M1 and M2 are switched to the ROFF state, and the output terminal is about 0 V after voltage division, that is, the output logic is '0'.Similar methods can be used to verify the correctness of other circuits, which will not be repeated here. The Circuit Design of the Remaining Three-State to Two-State Logic Function The remaining three-state to two-state logic function circuits, including F2, F3, F7, F11, F15, F17, F21, and F24 logic, can be obtained via cascading the circuits as mentioned above.For example, for the F2 logic circuit, it is only necessary to cascade an F4 logic circuit after the F26 logic circuit to complete the logic conversion corresponding to F2.As shown in Table 3, it is a design scheme of a single-variable three-state to two-state logic circuit designed via the cascade method.Among them, 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit. Logic Function F4 F5 F9 Circuit Structure The working principles of these logic functions can be understood via simply analyzing the circuits of logic functions F4 and F5.For F4, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, and the output terminal will be directly connected to the input terminal through memristor M1, so the output remains consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and the output terminal will be directly connected to -VDD through T1, that is, logic '-1' is the output.For F5, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, the output terminal will pass through memristor M1, which is directly connected to the input terminal, and the output is consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and there is a current path flowing from the input terminal to −VDD in the circuit.Both memristors M1 and M2 are switched to the ROFF state, and the output terminal is about 0 V after voltage division, that is, the output logic is '0'.Similar methods can be used to verify the correctness of other circuits, which will not be repeated here. 
The Circuit Design of the Remaining Three-State to Two-State Logic Function The remaining three-state to two-state logic function circuits, including F2, F3, F7, F11, F15, F17, F21, and F24 logic, can be obtained via cascading the circuits as mentioned above.For example, for the F2 logic circuit, it is only necessary to cascade an F4 logic circuit after the F26 logic circuit to complete the logic conversion corresponding to F2.As shown in Table 3, it is a design scheme of a single-variable three-state to two-state logic circuit designed via the cascade method.Among them, 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit. Logic Function F4 F5 F9 Circuit Structure The working principles of these logic functions can be understood via simply analyzing the circuits of logic functions F4 and F5.For F4, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, and the output terminal will be directly connected to the input terminal through memristor M1, so the output remains consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and the output terminal will be directly connected to -VDD through T1, that is, logic '-1' is the output.For F5, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, the output terminal will pass through memristor M1, which is directly connected to the input terminal, and the output is consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and there is a current path flowing from the input terminal to −VDD in the circuit.Both memristors M1 and M2 are switched to the ROFF state, and the output terminal is about 0 V after voltage division, that is, the output logic is '0'.Similar methods can be used to verify the correctness of other circuits, which will not be repeated here. The Circuit Design of the Remaining Three-State to Two-State Logic Function The remaining three-state to two-state logic function circuits, including F2, F3, F7, F11, F15, F17, F21, and F24 logic, can be obtained via cascading the circuits as mentioned above.For example, for the F2 logic circuit, it is only necessary to cascade an F4 logic circuit after the F26 logic circuit to complete the logic conversion corresponding to F2.As shown in Table 3, it is a design scheme of a single-variable three-state to two-state logic circuit designed via the cascade method.Among them, 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit. 
Logic Function F4 F5 F9 Circuit Structure The working principles of these logic functions can be understood via simply analyzing the circuits of logic functions F4 and F5.For F4, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, and the output terminal will be directly connected to the input terminal through memristor M1, so the output remains consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and the output terminal will be directly connected to -VDD through T1, that is, logic '-1' is the output.For F5, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, the output terminal will pass through memristor M1, which is directly connected to the input terminal, and the output is consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and there is a current path flowing from the input terminal to −VDD in the circuit.Both memristors M1 and M2 are switched to the ROFF state, and the output terminal is about 0 V after voltage division, that is, the output logic is '0'.Similar methods can be used to verify the correctness of other circuits, which will not be repeated here. The Circuit Design of the Remaining Three-State to Two-State Logic Function The remaining three-state to two-state logic function circuits, including F2, F3, F7, F11, F15, F17, F21, and F24 logic, can be obtained via cascading the circuits as mentioned above.For example, for the F2 logic circuit, it is only necessary to cascade an F4 logic circuit after the F26 logic circuit to complete the logic conversion corresponding to F2.As shown in Table 3, it is a design scheme of a single-variable three-state to two-state logic circuit designed via the cascade method.Among them, 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit. Logic Function F4 F5 F9 Circuit Structure The working principles of these logic functions can be understood via simply analyzing the circuits of logic functions F4 and F5.For F4, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, and the output terminal will be directly connected to the input terminal through memristor M1, so the output remains consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and the output terminal will be directly connected to -VDD through T1, that is, logic '-1' is the output.For F5, when input A is −VDD (logic '−1') or 0V (logic '0'), transistor T1 is turned off, the output terminal will pass through memristor M1, which is directly connected to the input terminal, and the output is consistent with the input.When input A is VDD (logic '1'), transistor T1 is turned on, and there is a current path flowing from the input terminal to −VDD in the circuit.Both memristors M1 and M2 are switched to the ROFF state, and the output terminal is about 0 V after voltage division, that is, the output logic is '0'.Similar methods can be used to verify the correctness of other circuits, which will not be repeated here. 
Simulation Verification of Three-State to Two-State Logic Circuit
To validate the above approach, the proposed circuits were simulated and verified using LTSpice. Figures 2-4 show the simulation waveforms of the three kinds of three-state to two-state logic: the transitions from the three-state logic circuits to logic (−1,1), (−1,0), and (0,1).
[Figure 2. Simulation results of the transition from three-state logic circuits to logic (−1,1).]
[Figure 3. Simulation results of the transition from three-state logic circuits to logic (−1,0).]
[Figure 4. Simulation results of the transition from three-state logic circuits to logic (0,1).]
Three-State to Three-State Logic
There are six types of single-variable logic functions in this category: F6, F8, F12, F16, F20, and F22. Among them, the output of F6 is equal to the input, which is called 'follower logic'. Only five types of three-state to three-state logic are effective and used in circuit design. The F22 logic (STI gate) circuit has been discussed in detail previously [31], so the remaining four logic circuits are introduced here. Table 4 shows the circuit structures of the up-spin logic function, F16, and the down-spin logic function, F20, together with the threshold voltage ranges of the MOS transistors used. The F16 circuit uses two memristors and two NMOS transistors, while the F20 circuit uses three memristors and three NMOS transistors. In the case of F20, when input A is −VDD (logic '−1'), MOS transistors T1, T2, and T3 are all turned off, and the output terminal is pulled up to VDD through memristor M1; that is, the output logic is '1'. When input A is 0 V (logic '0'), both T1 and T2 are turned off, T3 is turned on, and the output terminal is connected directly to −VDD through T3; that is, the output logic is '−1'. When input A is VDD (logic '1'), both T1 and T2 are turned on, T3 is turned off, and there is a current path from VDD to −VDD in the circuit. Both memristors M1 and M2 switch to the ROFF state, and the output terminal outputs a voltage near 0 V; that is, the output logic is '0'. The correctness of the up-spin logic function F16 can be verified by a similar method, which is not repeated here.
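To make the input-output behavior of these univariate functions concrete, the following Python sketch models them as truth tables over the encoding −1, 0, 1 (for −VDD, 0 V, VDD). This is a behavioral abstraction rather than an electrical simulation; F4, F5, and F20 encode the mappings stated above, while F16's mapping is inferred from the 'up-spin' description and the half adder truth table discussed later, so treat it as an assumption.

```python
# Behavioral truth-table models of the univariate balanced-ternary functions
# described above; logic levels are encoded as -1, 0, 1 (-VDD, 0 V, VDD).
# These are abstractions of the memristor/MOS circuits, not circuit simulations.

def f4(x):
    # Pass -1 and 0 through; map 1 to -1 (per the circuit description).
    return x if x != 1 else -1

def f5(x):
    # Pass -1 and 0 through; map 1 to 0 (per the circuit description).
    return x if x != 1 else 0

def f16(x):
    # Up-spin: rotate the trit upward (-1 -> 0, 0 -> 1, 1 -> -1) [inferred].
    return (x + 2) % 3 - 1

def f20(x):
    # Down-spin: rotate the trit downward (-1 -> 1, 0 -> -1, 1 -> 0).
    return x % 3 - 1

if __name__ == "__main__":
    for x in (-1, 0, 1):
        print(x, f4(x), f5(x), f16(x), f20(x))
```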
[Table 4: Logic Function | Up-Spin Logic Function, F16 | Down-Spin Logic Function, F20 | Circuit Structure]
The Circuit Design of the Remaining Three-State to Three-State Logic Function
The remaining three-state to three-state logic function circuits, the F8 and F12 logic, can also be obtained by cascading the circuits mentioned above. For example, for the F8 logic circuit, it is only necessary to cascade an F22 logic circuit after the F20 logic circuit to complete the logic conversion corresponding to F8. Similarly, the F12 logic circuit can be obtained by cascading the F16 logic and the F22 logic. Table 5 shows the design scheme of the univariate three-state to three-state logic circuits using the cascade method; the term 'Fm + Fn' indicates that the Fn logic circuit is cascaded after the Fm logic circuit.
Verification of Three-State to Three-State Logic Circuit Using LTSpice Simulation
The above circuits were simulated in LTSpice, which provides a verification of the design for a given input signal. The simulation waveform diagram of the three-state to three-state logic circuits is shown in Figure 5.
Design of Balanced Three-Valued Combinational Logic Circuit Based on Univariate Logic and Multiplexer
A multiplexer selects one of several input signals for the output. This paper uses the balanced ternary multiplexer circuit proposed in Ref. [31], which outputs one signal selected from the three inputs; the input-output relationship is that the output Y equals the input chosen by the selection signal. Here, S is the selection signal, and I−1, I0, and I1 are the three input signals. The multiplexer is composed of a balanced ternary one-line-to-three-line decoder, three balanced ternary minimum gates, and one balanced ternary maximum gate. The circuit structure diagram is shown in Figure 6.
When the selection signal is S = −1, the output terminals S−1, S0, and S1 of the one-line-to-three-line decoder output logic 1, −1, and −1, respectively. According to the working principles of the minimum gate and the maximum gate, the output signal of the circuit then equals the input signal I−1, that is, Y = I−1, and the circuit realizes the function of outputting signal I−1. When the selection signal is S = 0, the decoder terminals output logic −1, 1, and −1, respectively; in this case Y = I0, that is, the circuit realizes the function of outputting signal I0. Finally, when the selection signal is S = 1, the decoder terminals S−1, S0, and S1 output logic −1, −1, and 1, respectively, resulting in Y = I1.
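The decoder/minimum-gate/maximum-gate construction can be checked with a short behavioral sketch. The Python model below is an abstraction of the circuit in Figure 6 under the stated conventions: the decoder raises exactly one select line to logic 1 (the other two to −1), and the minimum and maximum gates are modeled as min and max over the levels −1, 0, 1. The exhaustive assertion confirms that the output always equals the selected input.

```python
# Behavioral sketch of the balanced ternary 3-to-1 multiplexer: a
# one-line-to-three-line decoder drives three minimum gates whose outputs
# feed a single maximum gate.

def decoder(s):
    # The line matching the select value outputs 1; the other two output -1.
    return [1 if s == k else -1 for k in (-1, 0, 1)]

def mux3(s, i_m1, i_0, i_p1):
    # Y = max(min(S_-1, I_-1), min(S_0, I_0), min(S_1, I_1))
    return max(min(line, i) for line, i in zip(decoder(s), (i_m1, i_0, i_p1)))

if __name__ == "__main__":
    T = (-1, 0, 1)
    # Exhaustive check over all 81 cases: the output equals the selected input.
    assert all(mux3(s, a, b, c) == {-1: a, 0: b, 1: c}[s]
               for s in T for a in T for b in T for c in T)
    print("mux3 verified")
```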
In this paper, a balanced ternary half adder, a balanced ternary multiplier, and a balanced ternary numerical comparator are also designed using the multiplexer and the univariate logic circuits described in Section 2. The truth tables and circuit structures of these applications are summarized in Tables 6 and 7, respectively. The design process and working principle of each circuit are explained in the following three subsections, along with the corresponding simulation results.
Balanced Ternary Half Adder
It can be seen from the truth table that when input signal A = −1 and B takes the values {−1, 0, 1}, the 'SUM' output of the half adder takes the corresponding values {1, −1, 0}. According to the working principle of the multiplexer, if A is used as the selection signal, we obtain the following results. When A = −1, the multiplexer selects the I−1 input terminal for the output, that is, SUM = I−1; as shown in Table 1, the univariate logic F20 fulfills exactly the conversion demanded in the red square in Table 6, so F20 is selected to connect input B to I−1 in the circuit. Similarly, when input signal A = 0, the sum output of the half adder is SUM = I0 = B, so B is connected directly to I0. When input signal A = 1, SUM = I1, and the logic F16 is consistent with the required conversion, so F16 is selected to connect input B to I1 in this case. The 'CARRY' output circuit is designed in the same way. Figure 7 shows the corresponding logic conversion diagram of the balanced ternary half adder.
According to the univariate logic function relationships in Table 1, for the 'sum' output part, the three logic conversion relationships correspond to the down-spin logic function, F20, the follower logic function, F6, and the up-spin logic function, F16. For the 'carry' output part, the three logic conversion relationships correspond to the logic functions F5, F14, and F15. Therefore, it is only necessary to introduce the corresponding univariate logic circuits into the circuit design. The LTSpice simulation waveform diagram is given in Figure 8.
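Putting the pieces together gives a behavioral sketch of the half adder. F5, F16, and F20 follow the descriptions above; the excerpt does not spell out F14 and F15, so they are inferred here from the carry truth table of balanced ternary addition (F14 as the constant-0 function, F15 mapping 1 to 1 and everything else to 0) and should be read as assumptions. The assertion checks the defining identity A + B = 3·CARRY + SUM.

```python
# Behavioral sketch of the balanced ternary half adder: A is the select
# signal of two multiplexers whose input lines carry univariate functions of B.

def f5(x):  return x if x != 1 else 0     # per the circuit description
def f14(x): return 0                      # inferred: constant 0
def f15(x): return 1 if x == 1 else 0     # inferred from the carry table
def f16(x): return (x + 2) % 3 - 1        # up-spin
def f20(x): return x % 3 - 1              # down-spin

def mux3(s, i_m1, i_0, i_p1):
    lines = [1 if s == k else -1 for k in (-1, 0, 1)]
    return max(min(l, i) for l, i in zip(lines, (i_m1, i_0, i_p1)))

def half_adder(a, b):
    s = mux3(a, f20(b), b, f16(b))        # 'SUM' mux lines: F20, F6, F16
    c = mux3(a, f5(b), f14(b), f15(b))    # 'CARRY' mux lines: F5, F14, F15
    return s, c

if __name__ == "__main__":
    # Cross-check against balanced ternary arithmetic: A + B = 3*CARRY + SUM.
    for a in (-1, 0, 1):
        for b in (-1, 0, 1):
            s, c = half_adder(a, b)
            assert a + b == 3 * c + s, (a, b, s, c)
    print("half adder verified")
```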
Balanced Ternary Multiplier
Balanced ternary does not generate a carry during multiplication, so it has a certain advantage over unbalanced ternary logic. The multiplier circuit design is as follows. When A = −1, the multiplexer selects the I−1 input terminal for the output; according to Tables 1 and 6, F22 can be selected to connect input B to I−1 in the circuit. When A = 0, the I0 terminal of the multiplexer is gated, and the output terminal outputs logic '0', so I0 can be connected directly to ground. When A = 1, the logic value of the output terminal is consistent with input signal B, so input signal B is connected to the I1 terminal of the multiplexer in this case. Figure 9 shows the LTSpice simulation waveform diagram of the circuit.
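The multiplier admits the same treatment. In the sketch below, F22 is modeled as the STI gate (i.e., negation), matching its earlier description, and the I0 line is tied to logic 0 as in the circuit. The exhaustive check confirms that the multiplexer construction computes exactly Y = A × B, which never produces a carry for single trits.

```python
# Behavioral sketch of the single-trit balanced ternary multiplier:
# with A as the select signal, the mux lines are F22(B), 0, and B.

def f22(x):
    return -x   # STI gate: standard ternary inverter (negation)

def mux3(s, i_m1, i_0, i_p1):
    lines = [1 if s == k else -1 for k in (-1, 0, 1)]
    return max(min(l, i) for l, i in zip(lines, (i_m1, i_0, i_p1)))

def multiplier(a, b):
    return mux3(a, f22(b), 0, b)

if __name__ == "__main__":
    assert all(multiplier(a, b) == a * b
               for a in (-1, 0, 1) for b in (-1, 0, 1))
    print("multiplier verified")
```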
Balanced Ternary Numerical Comparator
As is known, the output of the multiplexer equals I−1 when the input signal A = −1, that is, MLE = I−1. According to the truth tables in Tables 1 and 6, the logic F10 performs the same function when input A = −1, so F10 is selected to connect input B to I−1 in the circuit. Similarly, the logics F22 and F26 are chosen to perform the corresponding functions when input A = 0 and A = 1. Figure 10 shows the simulation results for the balanced ternary numerical comparator.
Comparison and Analysis
The numbers of components used by the proposed method are given in Table 8 and compared with those reported earlier [31]. It is evident that the proposed method offers significant advantages for the balanced ternary half adder, multiplier, and numerical comparator circuits, as the number of circuit components is reduced by 37.8%, 39.5%, and 48.2%, respectively. The proposed circuits were also compared with other design methods; our results show that the number of components can be significantly reduced using the proposed design method, which could further reduce the complexity of the circuit.
[Figure 1. Circuit symbol of univariate logic function Fn.]
[Figure 5. Simulation waveform diagram of the three-state to three-state univariate logic circuit.]
[Figure 7. The corresponding logic conversion diagram of the balanced ternary half adder: (a) 'sum' output part; (b) 'carry' output part.]
[Table 1. Balanced ternary univariate logic function truth table.]
[Table 2. Structure diagram of three-state to two-state logic circuit and threshold voltage of MOS transistor.]
[Table 3. The scheme of univariate three-state to two-state logic circuit designed via the cascade method.]
[Table 4. Circuit structure diagram of up-spin logic function, down-spin logic function, and threshold voltage of MOS transistor.]
[Table 5. Design scheme of univariate three-state to three-state logic circuit designed via the cascade method.]
[Table 6. Truth table of balanced ternary half adder, balanced ternary multiplier, and balanced ternary numerical comparator.]
[Table 7. Circuit structure of each application.]
2023-10-04T15:08:59.034Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "47118cdc9023d8e8a013c480c3ceb4e8ab91730b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/14/10/1895/pdf?version=1696088588", "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "9d10d3c8f14d8bac88b249e50f2a6a59bf264440", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
69634513
pes2o/s2orc
v3-fos-license
ARIMA model forecast based on EViews software
A time series is a series of data obtained in chronological order. Future values of most time series can be forecasted from current and past values. The EViews software is a software package specifically designed to process time series data. The autoregressive integrated moving average (ARIMA) model, a time series forecasting method, can be implemented with the EViews software. Based on the EViews software, the forecasting procedure with the ARIMA model is illustrated in this work. As an example, the Gross Domestic Product (GDP) of China is forecasted from 2016 to 2018.
Introduction
EViews is an acronym for Econometric Views, and the software is commonly referred to as an econometric software package. It can be used to "observe" quantitative relationships between socio-economic variables and economic activities with econometric methods and techniques. The software is a tool package developed by Quantitative Micro Software (QMS) of the United States for data analysis, regression analysis, and prediction under the Windows operating system. It can be used to quickly find statistical relationships in data and to predict future values. EViews combines spreadsheet and database technologies with the analysis capabilities of traditional statistical software and provides the visualization advantages of modern Windows software. Standard Windows menus and dialogs are operated with the mouse, and the results appear in windows that can be processed using standard Windows techniques. Additionally, EViews has powerful command and batch-language features: commands can be entered and executed on the command line or stored in program files that can easily be called in subsequent projects. It supports file formats such as Excel, SPSS, SAS, Stata, RATS, and TSP, and it can connect to ODBC databases. EViews has been widely applied in econometrics, macroeconomic forecasting, sales forecasting, financial analysis, cost analysis, Monte Carlo simulation, and data analysis and evaluation, and it has become one of the most widely used econometric statistical software packages in the world. With the development of society, the deepening of theoretical research, and the improvement of observation methods, more and more data series have been acquired, and time series analysis plays an increasingly important role in statistics and forecasting techniques. For a stationary time series, one can use the autoregressive (AR) model, the moving average (MA) model, or the autoregressive moving average (ARMA) model to fit the series and forecast its future values. In practice, however, many time series are non-stationary and must first be differenced, after which ARMA modeling can be applied; this combined process is the autoregressive integrated moving average (ARIMA) model [1,2]. Based on the EViews software, the modeling and forecasting procedure with the ARIMA model is illustrated in this work.
ARIMA Model
The ARIMA model, a time series prediction method, was proposed by Box and Jenkins in the 1970s. The model consists of AR, I, and MA, where AR denotes the autoregressive part, I denotes the integration (indicating the order of integration), and MA denotes the moving average part. In general, an econometric model can be established for a stationary sequence, and the unit root test is used to judge the stationarity of the sequence.
For a non-stationary sequence, the series should be converted to a stationary one by differencing; the number of differences required is called the order of integration. The ARIMA(p, D, q) model is essentially a combination of the differencing operation and the ARMA(p, q) model [3,4]. A non-stationary I(D) process is one that can be made stationary by taking D differences; such processes are often called difference-stationary or unit root processes. A series that can be modeled as a stationary ARMA(p, q) process after being differenced D times is denoted ARIMA(p, D, q) [5]. The form of the ARIMA(p, D, q) model is
$$\Delta^{D} y_{t} = c + \phi_{1}\,\Delta^{D} y_{t-1} + \cdots + \phi_{p}\,\Delta^{D} y_{t-p} + \varepsilon_{t} + \theta_{1}\,\varepsilon_{t-1} + \cdots + \theta_{q}\,\varepsilon_{t-q},$$
where $\Delta^{D} y_{t}$ denotes the D-th differenced series and $\varepsilon_{t}$ is an uncorrelated process with mean zero. In lag operator notation, $L^{i} y_{t} = y_{t-i}$, so the ARIMA(p, D, q) model can be written as
$$\phi^{*}(L)\, y_{t} = \phi(L)\,(1-L)^{D}\, y_{t} = c + \theta(L)\,\varepsilon_{t}.$$
Here, $\phi^{*}(L)$ is an unstable AR operator polynomial with exactly D unit roots. It can be factored as $\phi(L)(1-L)^{D}$, where $\phi(L) = 1 - \phi_{1}L - \cdots - \phi_{p}L^{p}$ is a stable degree-p AR lag operator polynomial. Similarly, $\theta(L) = 1 + \theta_{1}L + \cdots + \theta_{q}L^{q}$ is an invertible degree-q MA lag operator polynomial. The ARIMA model is a commonly used time series model and a short-term prediction model with high precision. The basic idea of the model is that a time series is a set of random variables depending on time; although the individual values are random, the changes of the entire series follow certain rules that can be approximated by a corresponding mathematical model. Through the analysis of this mathematical model, the structure and characteristics of the time series can be understood more fundamentally, and prediction that is optimal in the minimum-variance sense can be achieved.
Procedure of ARIMA modeling
The procedure flow chart of ARIMA modeling and forecasting is given in Figure 1. ARIMA modeling is a procedure for determining the parameters p, D, and q [6,7,8,9]. The detailed process is as follows: (1) Identifying the stationarity of the time series. The stationarity of the sequence is judged from the line graph, scatter plot, and autocorrelation and partial autocorrelation function graphs of the time series. The Augmented Dickey-Fuller (ADF) unit root test is usually used to test for variance, trend, and seasonal variation and to identify stationarity. (2) Determining the order of integration D. If the time series is stationary, go directly to Step (3); if it is non-stationary, an appropriate transformation (differencing, variance stabilization, logarithm, square root) should be applied to convert it to a stationary sequence, and the number of differences taken is the order of integration. (3) ARMA modeling. For the resulting sequence of Step (2), the autocorrelation function (ACF) and partial autocorrelation function (PACF) are calculated, and the autoregressive order p and the moving average order q of the ARMA model are estimated. The basic principle for determining the orders p and q is given in Table 1. (4) Performing parameter estimation. The autocorrelation and partial autocorrelation graphs are used to judge the number of autocorrelation and partial autocorrelation coefficients with a significant level, and a rough model of the sequence is selected. (5) Diagnostic testing and optimization. The model is diagnosed and optimized by performing a white noise test on the residuals. If the residuals are not white noise, return to Step (4) and re-select the model; if the residuals are white noise, several candidate models can be built in Step (4), and the optimal model is chosen from all the fitted models that pass the test.
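For readers working outside EViews, the same model class is available in Python's statsmodels. The sketch below is an illustrative equivalent of the workflow, not the paper's own EViews session: it fits an ARIMA(1, 1, 0) to a synthetic I(1) series (a random walk with drift) and produces a short forecast.

```python
# Minimal ARIMA(p, D, q) fit with statsmodels on a synthetic I(1) series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(0.5 + rng.normal(size=200))   # random walk with drift

model = ARIMA(y, order=(1, 1, 0))           # order = (p, D, q)
result = model.fit()

print(result.summary())                     # coefficients, AIC, BIC
print(result.forecast(steps=3))             # three-step-ahead forecast
```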
Procedure of ARIMA forecasting
The future values of a time series can be forecasted with the ARIMA model, and an important application of the EViews software is modeling and prediction based on the ARIMA model. The EViews software provides two prediction methods: Static, a one-step-ahead prediction, and Dynamic, a short-term dynamic prediction. The procedure is as follows: (1) If the time series is non-stationary, it is first converted to a stationary sequence; the best model parameters are selected and the ARIMA(p, D, q) model is established. (2) In the Equation window of the EViews software, the Forecast menu is selected. In the dialog box, Static or Dynamic can be chosen as needed; the name of the forecast sequence can be modified or left at the default value, and OK is clicked.
Data description
The Gross Domestic Product (GDP) is the core indicator of national economic accounting. It is an important index for measuring the overall economic situation of a country, reflecting the country's economic strength, structural layout, and market scale. In 2016, the National Bureau of Statistics of China revised its research and development expenditure accounting method according to the international standard of national economic accounting (the System of National Accounts 2008) jointly promulgated by the five major international organizations. The revised China GDP data from 1952 to 2015 are listed in Table 2; these GDP data are modeled and forecasted in the following sections.
Stationarity test
The GDP data series for 1952-2015 is plotted in Figure 2, and the result of the stationarity (ADF) test on the data is given in Table 3. The statistic ADF = 3.651882 is greater than the critical values at the 0.01, 0.05, and 0.1 significance levels; that is, the original GDP sequence is non-stationary. As seen in Figure 2, the original sequence grows exponentially, so the natural logarithm of the GDP data is taken to reduce the non-stationarity, yielding the LGDP sequence. Applying the ADF test to LGDP gives ADF = 1.397117, which is still greater than the critical values at the 0.01, 0.05, and 0.1 significance levels; the null hypothesis is accepted with a large P value, so the LGDP sequence is still non-stationary. Further, the first-order difference is taken and the DLGDP sequence is obtained. The results of the ADF test for the DLGDP sequence are given in Table 4: ADF = -4.340237 is less than all three critical values, so the DLGDP sequence, obtained after the logarithmic transformation and first-order differencing, is a stationary series and passes the significance test of stationarity. Hence the original GDP sequence is integrated of order one, that is, LGDP ~ I(1).
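The stationarity workflow above (an ADF test on the raw series, on its logarithm, and on the first difference of the logarithm) can be reproduced with statsmodels' adfuller. The series below is synthetic stand-in data mimicking an exponentially growing series such as GDP, since the sketch is meant to show the procedure rather than the paper's numerical results.

```python
# ADF tests on a series, its log, and the differenced log.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
lgdp = np.log(679.1) + np.cumsum(0.09 + 0.03 * rng.normal(size=64))
gdp = np.exp(lgdp)   # 64 annual values, mimicking 1952-2015

def adf_report(series, name):
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(f"{name}: ADF = {stat:.4f}, p-value = {pvalue:.4f}")

adf_report(gdp, "GDP")                      # expected: non-stationary
adf_report(np.log(gdp), "LGDP")             # expected: still non-stationary
adf_report(np.diff(np.log(gdp)), "DLGDP")   # expected: stationary
```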
Model Identification
With the EViews software, the autocorrelation and partial autocorrelation function graphs of the DLGDP series are plotted in Figure 3. The autocorrelation coefficient of the DLGDP sequence is significantly non-zero at lag 1 and lies essentially within the confidence band for lags greater than 1, so q can be taken as 1. The partial autocorrelation coefficient is significantly non-zero at lag 1 and is also significantly different from 0 at lag 2, so p = 1 or p = 2 can be considered. Since this judgment is rather subjective, to establish a more accurate model the ranges of p and q are appropriately relaxed, and multiple ARMA(p, q) models are established: orders 0, 1, and 2 are pre-estimated for both the autoregressive and moving average parts on the processed sample data. Table 5 lists the test results of ARMA(p, q) for the different parameters. The adjusted R-squared, AIC value, SC value, and standard error of regression are all important criteria for selecting models; the AIC and SC criteria are mainly used for ranking and selecting the optimal model. Generally, the larger the coefficient of determination and the smaller the AIC value, SC value, and residual variance, the better the corresponding ARMA(p, q) model. It should be emphasized that although the appropriate ARMA model is usually selected using the AIC and SC values, minimal AIC and SC values are not sufficient conditions for the optimal ARMA model. The method used in this work is to first establish the model with the minimum AIC and SC values and perform a parameter significance test and a residual randomness test on the estimation result. If the model passes the tests, it can be regarded as optimal; if not, the model with the second smallest AIC and SC values is selected and the statistical tests are repeated, and so on, until an appropriate model is found. In Table 5, the models that did not pass the parameter significance test and the residual randomness test are marked with "*". Finally, the ARMA(1, 0) model is preferred.
Model establishment and inspection
The model parameters are then estimated. From the t statistics of the model coefficients and their P values, the parameter estimates of all explanatory variables of the model are significant at the 0.01 significance level. The model is used to fit the DLGDP data, and the result is shown in Figure 4; the actual data are given by the solid line, and the upper and lower dotted lines correspond to the fitted values and residuals of the model. A white noise test is performed on the residuals after fitting the ARIMA(1, 1, 0) model, and the autocorrelation and partial autocorrelation function graphs of the residual series are shown in Figure 5. The residuals are white noise, indicating that the model is valid.
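The order-selection step can likewise be scripted. The sketch below fits every ARMA(p, q) candidate with p, q ∈ {0, 1, 2} on a placeholder differenced series and ranks the candidates by AIC and SC (reported as BIC in statsmodels); on the actual DLGDP data, this ranking combined with the significance and residual tests is what leads to ARMA(1, 0).

```python
# Grid search over ARMA(p, q) orders ranked by information criteria.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
dlgdp = 0.09 + 0.03 * rng.normal(size=63)   # stand-in for the DLGDP series

candidates = []
for p, q in itertools.product(range(3), range(3)):
    try:
        res = ARIMA(dlgdp, order=(p, 0, q)).fit()
        candidates.append((res.aic, res.bic, p, q))
    except Exception:
        continue   # skip candidates that fail to converge

for aic, bic, p, q in sorted(candidates):
    print(f"ARMA({p},{q}): AIC = {aic:.2f}, SC = {bic:.2f}")
```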
Data forecasting
First, the model's fitting performance is checked against the GDP value in 2015: the forecast value for 2015 is 705847.6 hundred million RMB, the actual value is 685506 hundred million RMB, and the relative error is 2.97%. The forecast value is close to the actual result, indicating that the model fits well. Under the graphical interface of the EViews software, the Dynamic forecast mode is used to predict the GDP values from 2016 to 2018, and the results are listed in Table 7. The National Bureau of Statistics of China has not officially released revised GDP data for 2016 and 2017. According to the official website of the National Bureau of Statistics, the verified value of GDP in 2016 is 743585 hundred million RMB, and the preliminary value of GDP in 2017 is 827122 hundred million RMB; the relative errors of the GDP forecasts for 2016 and 2017 are therefore 0.09% and 1.17%, respectively.
Summary
The ARIMA model is a relatively advanced time series forecasting method that can realistically describe dynamic change patterns, and it can be used for statistical analysis and forecasting of time series under certain conditions. In particular, the model is suitable for short-term prediction; large deviations occur when the forecasting horizon is long. Based on the EViews software, this work presents time series modeling and forecasting with the ARIMA model. It should be noted that, for a specific time series subject to many external factors, model predictions that rely solely on current values and historical data can deviate to some degree from the real situation.
2019-02-19T14:08:36.227Z
2018-12-20T00:00:00.000
{ "year": 2018, "sha1": "06ec6b1622c4b85a46d4bb8327d0e1af861f2673", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/208/1/012017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bd92c860157eb07937e8c2536ca307fc940c7986", "s2fieldsofstudy": [ "Economics", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
139100372
pes2o/s2orc
v3-fos-license
Editorial: Actinobacteria in Special and Extreme Habitats: Diversity, Function Roles and Environmental Adaptations, Second Edition
1 The Key Laboratory of Biotechnology for Medicinal Plant of Jiangsu Province, Jiangsu Normal University, Xuzhou, China, 2 State Key Laboratory of Biocontrol and Guangdong Provincial Key Laboratory of Plant Resources, School of Life Sciences, Sun Yat-sen University, Guangzhou, China, 3 Southern Laboratory of Ocean Science and Engineering (Guangdong, Zhuhai), Zhuhai, China, 4 School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom, 5 Bioproducts Research Chair, Zoology Department, College of Science, King Saud University, Riyadh, Saudi Arabia, Botany and Microbiology Department, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt, National Agricultural Research Center, Bio-resources Conservation Institute, Islamabad, Pakistan
Actinobacteria produce structurally diverse bioactive natural products, such as enzymes, antibiotics, and antitumor and immunoregulatory agents. Actinobacteria are not only the main producers of microbial-derived drugs; they also play an important role as symbionts in plant-associated microbial communities (Barka et al., 2015). At the same time, members of the phylum Actinobacteria are widely distributed in different ecological environments, including diverse special and extreme habitats of aquatic and terrestrial ecosystems (Qin et al., 2011; Dhakal et al., 2017; Goodfellow et al., 2018). Compared with actinobacteria from temperate habitats, the community structure, diversity, biological activities, and environmental adaptation mechanisms of actinobacteria in special and extreme environments remain relatively unstudied and unclear, and their functions and utilization are even less reported. These actinobacteria are potential new sources of novel natural products and functions for exploitation in medicine, agriculture, and industry. Encouragingly, more and more reports have appeared in this field recently. These discoveries raise intriguing new questions: are actinobacteria ubiquitous in the special and extreme environments on Earth, and where are the limits of their survival? At the same time, the discovery of more and more pure cultures and new taxa of actinobacteria from extreme environments has raised new questions for the taxonomy of the phylum Actinobacteria: how can we establish a more accurate taxonomic system that reflects the natural evolutionary relationships of Actinobacteria? Moreover, how can we recognize the specific ecological functions of these ecologically adapted actinobacteria and their potentially unique environmental adaptation mechanisms? Following the success of the Research Topic "Actinobacteria in special and extreme habitats: diversity, functional roles, and environmental adaptations" (Qin et al., 2016), organized in 2015, we are happy to launch a second edition. More than 100 authors from 14 different countries contributed a total of 16 articles to this new edition, including one review paper and 15 original research articles, covering a variety of topics related to actinobacteria in special and extreme habitats.
These articles address issues related to the cultivation of rare actinobacteria, metagenomic analyses of diversity, phylogenomic taxonomy, genome mining, bioactive compounds, and habitat adaptation mechanisms studied with omics approaches. We are grateful to all authors who submitted their manuscripts to the second edition of this Research Topic. Special and extreme environments are likely to contain abundant rare actinobacteria and novel species; however, obtaining pure cultures is a prerequisite for further study of their classification and function. Caves, spread all over the world, are dark, humid, and nutrient-limited, and the cultivation of cave microorganisms has proven challenging (Ghosh et al., 2017). An original article by Fang et al. explores the effects of heat pretreatment, pH, and calcium salts on the isolation of rare actinobacteria from karstic caves in Yunnan, China. A total of 204 isolates were cultured, and the authors obtained a remarkably high number of 29 different rare actinobacterial genera. Actinobacteria from caves have been found to produce a variety of secondary metabolites, yet studies of microbial ecology in caves are still very limited. Recently, members of the Actinobacteria were reported to be possibly involved in moonmilk genesis (Bindschedler et al., 2014). Interestingly, the article by Maciejewska et al. provides novel evidence that some filamentous Streptomyces could be key protagonists in the genesis of moonmilk through a wide spectrum of biomineralization processes. These studies enlarge our knowledge of cave actinobacterial diversity and their special ecological functions. Deserts are the most extreme non-polar biome on Earth. Recent metagenomic analyses of hyper-arid and extreme hyper-arid desert soils revealed a remarkable degree of actinobacterial "dark matter" (Idris et al., 2017). The diversity of actinobacterial taxa in the Badain Jaran (BJD) and Tengger (TGD) Deserts of China was assessed using combined cultivation-dependent and high-throughput sequencing techniques (Sun et al.). These authors found that Actinobacteria was the predominant phylum, comprising 35.0 and 29.4% of the communities in the two desert sands, respectively. Taxonomic classification of 1,162 actinobacterial strains revealed a high diversity of 73 genera, including 37 new taxa, and 10.36% of the tested isolates showed antimicrobial activities (Sun et al.). Their ecological significance in deserts, however, deserves further exploration. Marine actinobacteria have attracted more and more attention because of their special physiological characteristics and their capacity to produce various natural compounds with diverse bioactivities (Schinke et al., 2017). However, marine actinobacteria producing anti-complement agents are still poorly explored. Xu et al. analyzed the genome of the marine Streptomyces sp. DUT11, which shows strong anti-complement activity, and isolated the active compounds tunicamycins I, V, and VII. Another marine actinobacterium, Glycomyces sediminimaris UTMC 2460, which shows anti-microfouling activity, was analyzed for its active compounds; the authors concluded that the diketopiperazines produced by this strain could be used as environmentally safe anti-fouling agents to prevent fouling in marine habitats (Heidarian et al.). The article by Sun et al. reveals the marine adaptation mechanism of a sponge-derived actinobacterium, Kocuria flava S43, by comparative genomic analysis.
These authors found that gene acquisition was probably a primary mechanism of environmental adaptation in K. flava S43 (Sun et al.). These studies indicate that marine actinobacteria are rich sources of diverse biological compounds. In this Research Topic, we collected five papers related to endophytic actinobacteria, which have also been a research hotspot in recent years. Habitat-adapted, symbiotic, indigenous endophytic actinobacteria from special and extreme habitats probably include novel taxa and compounds and enhance their hosts' tolerance of harsh environments (Mesa et al., 2017; Qin et al., 2018). The article by Singh and Dubey reviews the taxonomic and chemical diversity of endophytic actinobacteria in arid, mangrove, non-mangrove saline, and aquatic ecosystems and discusses their potential biotechnological applications. Similarly, Jiang et al. explore the diversity and antibacterial activities of endophytic actinobacteria from five different mangrove plants in the Guangxi Zhuang Autonomous Region, China; they found 28 actinobacterial genera and four potential new species. The two articles by Bibi et al. and Wei et al. report on endophytic actinobacteria and their bioactive secondary metabolites from the halophyte Salsola imbricata and from Chinese tea plants; their results confirm again that endophytic actinobacteria constitute an underexplored bioresource library of active compounds. Lasudee et al. report on the actinobacteria associated with arbuscular mycorrhizal spores of Funneliformis mosseae and explore their potential plant-growth-promoting effects in agriculture; the results showed that the isolates could produce indole-3-acetic acid (IAA) and siderophores, solubilize phosphate, and promote rice plant growth. Genome sequencing and phylogenomic strategies have been widely applied in taxonomy and prokaryotic systematics. For instance, the class Acidimicrobiia currently comprises few cultivable species, containing only the order Acidimicrobiales and the two families Acidimicrobiaceae and Iamiaceae with few genera (Ludwig et al., 2012). Hu et al. analyzed 20 sequenced members of this class and identified 15 conserved signature indels (CSIs) in widely distributed proteins and 26 conserved signature proteins (CSPs); the phylogenomic analysis revealed three major lineages in addition to the two recognized families. Furthermore, Sangal et al. revisit the taxonomic status of the biomedically and industrially important genus Amycolatopsis using a phylogenomic approach. According to the genome sequence analysis and the core genome phylogeny, the genus Amycolatopsis can be subdivided into four major clades and several singletons (Sangal et al.). These results indicate that whole-genome sequence analysis can provide a more accurate taxonomic status for prokaryotes. The development of omics methods has provided robust support for our understanding of actinobacterial adaptation mechanisms in special and extreme habitats. Cornell et al. obtained 76 plasmid-containing actinomycete isolates from the Great Salt Plains of Oklahoma. Eleven isolates were chosen for genome sequencing, and the results revealed series of genes involved in antibiotic production, antibiotic and heavy metal resistance, osmoregulation, and stress response, which likely facilitate their survival in the extremely halophilic environment (Cornell et al.). Through transcriptome analysis and physiological and molecular experiments, Han et al.
found that the accumulation of ectoine plays a vital role in the salt stress tolerance of the halotolerant Nocardiopsis gilva YIM 90087T. The article by Yin et al. reports that the aerobic, cellulose-degrading thermophilic actinomycete Thermoactinospora rubra YIM 77501T uses a hybrid strategy to utilize carbon sources at different temperatures, as shown by combined genomic and transcriptomic methods. In summary, this second edition of the Research Topic presents recent discoveries on the diversity, functional roles, and environmental adaptations of actinobacteria in special and extreme habitats, and it broadens our knowledge of actinobacterial diversity and ecophysiological function. We are delighted to present this Research Topic in Frontiers in Microbiology, and we hope that readers of the journal will not only enjoy it but also find it a useful reference. Future research looks forward to the innovation and application of new technologies, such as single-cell microfluidic techniques for obtaining new pure cultures. At the same time, cooperation among different disciplines and international cooperation among scientists from different countries should be strengthened. We also believe that, in the future, more "dark matter" from actinobacteria in special and extreme environments will be discovered and utilized for the benefit of human beings.
2019-04-30T13:08:30.507Z
2019-04-30T00:00:00.000
{ "year": 2019, "sha1": "32c2b58a34cfb5486285950a99205cc48a15ce61", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.00944/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32c2b58a34cfb5486285950a99205cc48a15ce61", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
122538621
pes2o/s2orc
v3-fos-license
Dynamic Flying Ant Colony Optimization (DFACO) for Solving the Traveling Salesman Problem
This paper presents an adaptation of the flying ant colony optimization (FACO) algorithm to solve the traveling salesman problem (TSP). This new modification is called dynamic flying ant colony optimization (DFACO). FACO was originally proposed to solve the quality of service (QoS)-aware web service selection problem. Many researchers have addressed the TSP, but most solutions could not avoid the stagnation problem. In FACO, a flying ant deposits a pheromone by injecting it from a distance; therefore, not only the nodes on the path but also the neighboring nodes receive the pheromone. The amount of pheromone a neighboring node receives is inversely proportional to the distance between it and the node on the path. In this work, we modified the FACO algorithm in several ways to make it suitable for the TSP. For example, the number of neighboring nodes that receive pheromone varies depending on the quality of the solution compared to the rest of the solutions, which helps to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contains a combination of regular and flying ants. These modifications aim to help the DFACO algorithm obtain better solutions in less processing time and avoid getting stuck in local minima. This work compared DFACO with (1) ACO and five different methods using 24 TSP datasets and (2) parallel ACO (PACO)-3Opt using 22 TSP datasets. The empirical results showed that DFACO achieved the best results compared with ACO and the five different methods for most of the datasets (23 out of 24) in terms of the quality of the solutions. Further, it achieved better results compared with PACO-3Opt for most of the datasets (20 out of 21) in terms of solution quality and execution time.
Introduction
The traveling salesman problem (TSP) [1] involves finding the shortest tour for a salesperson who wants to visit each city in a group of fully connected cities exactly once. The TSP is a discrete optimization problem and a classic example of the category of computing problems known as NP-hard problems [2,3]. Although there are simple algorithms for solving these problems, they require exponential time, which makes them impractical for large problem instances. Hence, metaheuristic optimization algorithms are usually applied to find good, though not necessarily optimal, solutions. The TSP can be used to model several wireless sensor network (WSN) problems [4]. In a WSN, the sensors are located in a sensing field to collect data and send these data to the source node wirelessly. There are two ways to increase the lifetime of the sensors: first, by reducing the size and number of the data [5,6], and second, by reducing the cost of transferring the data [4]. For example, a good solution to the TSP can serve as an efficient diffusion route that reduces the transfer cost. Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them were unable to avoid the stagnation problem, or they obtained good solutions only at the cost of long execution times [7]. In this work, we enhanced the ant colony optimization (ACO) algorithm based on imaginary ants that can fly. These ants deposit pheromone on neighboring nodes while flying by injecting it from a distance.
This allows not only the nodes on a good path to receive some pheromones but also the neighboring nodes. The algorithm also makes use of the 3-Opt algorithm to help avoid reaching a local minimum. The main contributions of this work on the flying ACO (FACO) algorithm are as follows: (1) proposing a dynamic neighboring selection mechanism to balance exploration and exploitation, (2) reducing the execution time of FACO by making flying ants equal to half of the ants, and (3) adapting the flying process to work with the TSP. The main contributions of this work in general are as follows: (1) obtaining better-quality solutions, (2) significantly reducing the execution time, and (3) avoiding getting stuck at a local minimum. The paper is organized as follows: in Section 2, we discuss related works; Section 3 presents the proposed enhancement of the ACO algorithm; Section 4 shows the experimental results; and the conclusion is given in Section 5. Related Work This section reviews some important and more recent works in this area. Metaheuristic Solutions The TSP has been widely used as a benchmark problem to evaluate many metaheuristic and nature-inspired algorithms. Chen and Chien [8] presented a hybrid method using genetic simulated annealing, ACO, and particle swarm optimization. Each algorithm performs a specific task: ACO generates the initial solutions for the genetic simulated annealing algorithm, which searches for better solutions based on the initial solutions. Then, the better solutions are used to update the pheromone trails. Finally, the particle swarm optimization algorithm exchanges the pheromone information after a predefined number of cycles. Deng et al. [9] presented a hybrid method that combined a genetic algorithm and ACO. They used a multipopulation strategy to enhance the local search. In addition, they used chaotic optimization to avoid the slow convergence problem of ACO. They controlled the trade-off between exploration and exploitation by using an adaptive control strategy to distribute the pheromones uniformly. Eskandari et al. [10] proposed a local solution enhancement for ACO based on mutation operators. The local solution is mutated to generate a new solution, which is kept if it is better than the original solution. The mutation operators include swap, insertion, and reversion. A comparison between ACO and cuckoo search (CS) algorithms for solving the TSP was conducted in [11]; only five city plans were used in this comparison. Mavrovouniotis et al. [12] used local search operators to support the ACO algorithm. This new method was used for the dynamic TSP. The best solution from ACO is passed to local search operators that remove and insert cities to generate a new solution. Alves et al. [13] introduced an adapted ACO algorithm based on social interaction, called social interaction ant colony optimization (SIACO). The social interaction was introduced to enhance pheromone deposition. Han et al. [14] proposed a niching ant colony system (NACS) algorithm. This algorithm enhances the ACO algorithm in two ways: it applies a niching strategy and uses multiple pheromone depositions. Pintea et al. [15] introduced an enhancement for ACO based on clustering, where the cities are divided into clusters and ACO is used to find the minimum cost for each cluster. Xiao et al. [16] proposed a multistage ACO algorithm that reduces the initial pheromone concentration based on the nearest-neighbor method.
Then, the mean cross-evolution strategy is used to enhance the solution space. Zhou et al. [17] proposed an ACO algorithm for large-scale problems that utilizes graphics processing units. The 3-Opt algorithm is widely used to enhance the local search of metaheuristic algorithms. Mahi et al. [18] presented a hybrid algorithm using particle swarm optimization and ACO. The particle optimizes the α and β parameters, which affect the performance of ACO. The 3-Opt algorithm is then used to avoid the stagnation problem. Gülcü et al. [7] introduced a parallel cooperative method, which is a hybridization of parallel ACO (PACO) and the 3-Opt algorithm. The proposed algorithm is named PACO-3Opt, and it uses a parallel set of colonies. These multiple colonies follow a master-slave paradigm. The 3-Opt algorithm is applied by these colonies after a predefined number of iterations. Khan et al. [19] used 2-Opt and 3-Opt with an artificial bee colony (ABC) algorithm. They also created new paths by combining swap sequences with ABC. Ouaarab et al. [20] proposed a discretized version of the CS algorithm and added a new cuckoo category. This new category aims to manage exploration and exploitation by using Lévy flights and multiple searching methods. Osaba et al. [21] presented a discrete version of the bat algorithm in which each bat moves based on the best bat. If a bat is located far from the best bat, the movement will be large, but if it is close to the best bat, the step will be small. Choong et al. [22] introduced a hybrid of the ABC algorithm and a modified choice function. The modified choice function is used to regulate the neighborhood selection of employed and onlooker bees. Many works in the literature present improvements on existing algorithms by suggesting methods to control the trade-off between exploitation and exploration, which are the two main steps that form the basis of many metaheuristic approaches and nature-inspired algorithms. In exploitation, the accumulated knowledge about the search space is used to guide the search, while in exploration, risk is taken to explore unfamiliar regions of the search space, in the hope that these regions may contain a solution better than the known ones [23]. Researchers usually use hybrid methods to merge different algorithms' capabilities; however, the new methods can become too complex and sometimes even incomprehensible. By contrast, we aimed in this work to enhance the ACO algorithm by adding extra procedures while keeping the method understandable and easy to use. 3-Opt Algorithm The 3-Opt algorithm was introduced to solve the TSP. It replaces three edges of the old tour with another three edges to produce a new tour [24] and retains the new tour if it is better. The process is repeated until no further improvement is found. 3-Opt is a local search algorithm; therefore, it is used to help ACO avoid local minima [7] by optimizing the solutions locally [25]. There are $\binom{n}{3}$ possible combinations of three edges to replace in a tour with n cities [26]. For instance, from three edges, we can obtain eight possible reconnections, as shown in Figure 1, where (a) to (h) represent these eight combinations [27].
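To make the mechanics above concrete, the following Python sketch enumerates cut points and tries a handful of the eight reconnections (reversing one or both inner segments, or swapping them) for a tour over 2-D city coordinates. It is an illustrative first-improvement pass, not the paper's implementation, and all names are ours.

```python
import itertools
from math import dist

def tour_length(tour, coords):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def three_opt_pass(tour, coords):
    """One pass of 3-Opt: for every choice of three cut points, try a
    subset of the eight reconnections shown in Figure 1 and return the
    first improving tour found (first-improvement strategy)."""
    n = len(tour)
    best_len = tour_length(tour, coords)
    for i, j, k in itertools.combinations(range(1, n), 3):
        a, b, c, d = tour[:i], tour[i:j], tour[j:k], tour[k:]
        for candidate in (a + b[::-1] + c + d,          # reverse b
                          a + b + c[::-1] + d,          # reverse c
                          a + b[::-1] + c[::-1] + d,    # reverse both
                          a + c + b + d):               # swap b and c
            cand_len = tour_length(candidate, coords)
            if cand_len < best_len:
                return candidate, cand_len
    return tour, best_len  # local minimum reached
```

Running such a pass repeatedly until it returns the unchanged tour reproduces the "repeat until no further improvement" loop described above.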
The experiments reported in [26,28] show that combining the 3-Opt algorithm with other metaheuristic algorithms improves the solutions found by these algorithms. This is because the 3-Opt algorithm mitigates the effect of local minima [28]. Ant Colony Optimization ACO was inspired by the way real ants forage for food. Initially, real ants forage for food randomly, depositing a chemical substance called pheromone on their paths. The path between the colony and the nearest food source tends to receive more pheromones, which attracts more ants to follow the same path. Once the food source is exhausted, the ants abandon the path and the pheromones evaporate, forcing the ants to start searching for another food source randomly. The ACO algorithm [28][29][30] simulates the foraging behavior of real ants. It initializes all ants randomly, and each ant searches for a potential solution. In addition, ACO assigns an amount of pheromone to each edge of the solution path that is proportional to the quality of the solution. In each iteration, each ant moves to unvisited nodes in order to construct a potential solution. The next node to visit is selected according to a probability distribution that favors the nodes with large amounts of pheromones (τij). ACO also takes into account a local heuristic function (ηij). Equation (1) shows the solution generation formula, which computes the probability that ant k selects the edge from node i to node j: $p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}[\eta_{ij}]^{\beta}}{\sum_{l \in N_{i}^{k}}[\tau_{il}]^{\alpha}[\eta_{il}]^{\beta}}$ for $j \in N_{i}^{k}$ (1), where α and β are coefficient parameters that determine the importance of the pheromone value (τij) and the value of the local heuristic (ηij). The local heuristic ηij is problem dependent; for the TSP, it is defined as 1 divided by the length of the edge between nodes i and j. $N_{i}^{k}$ is the list of nodes unvisited from node i by ant k. The ants update the pheromones locally using Equation (2). This update is called the local pheromone update: $\tau_{ij} \leftarrow (1-\rho)\tau_{ij} + \rho\tau_{0}$ (2), where τ0 is the initial pheromone value and ρ is the evaporation rate. ACO selects the best solution greedily. The best ant updates the pheromone trails on its path using Equation (3). This process is called the global pheromone update: $\tau_{ij} \leftarrow (1-\rho)\tau_{ij} + \rho / L_{gb}(t)$ (3), where $L_{gb}(t)$ is the tour length of the best ant. These ACO algorithm processes are illustrated in Figure 2 [31]. The ACO algorithm suffers from a stagnation problem [32] because pheromone accumulates on the explored paths, and as a result, the chances of exploring other paths decrease. The FACO algorithm, discussed next, addresses this issue.
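The rules of Equations (1)-(3) translate directly into code. The sketch below is a minimal illustration of the roulette-wheel edge selection and the two pheromone updates; it assumes tau and eta are dense matrices, and it is not the authors' implementation.

```python
import random

def select_next_node(i, unvisited, tau, eta, alpha, beta):
    """Roulette-wheel selection of the next node (Equation (1)):
    the probability of edge (i, j) grows with its pheromone tau[i][j]
    and local heuristic eta[i][j] = 1 / distance(i, j)."""
    weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in unvisited]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # numerical safeguard

def local_update(tau, i, j, rho, tau0):
    """Local pheromone update (Equation (2))."""
    tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0

def global_update(tau, best_tour, best_len, rho):
    """Global pheromone update along the best ant's tour (Equation (3))."""
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i][j] = (1 - rho) * tau[i][j] + rho / best_len
```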
Flying Ant Colony Optimization In [33], a flying ant colony algorithm was proposed to solve the quality of service (QoS)-aware web service composition problem. Web service composition involves selecting the best combination of web services, where each service is selected from a set of candidate services that fulfill a certain task. The solutions are evaluated according to a set of QoS properties, such as reliability, cost, response time, and availability. The algorithm assumes that, in addition to normal walking ants, there are also flying ants. Flying ants inject their pheromones from a distance, so that not only the nodes on the path receive some pheromones but also their neighboring nodes. The amount of pheromone a neighboring node receives is inversely proportional to the distance between it and the node on the path. This makes the ants more likely to explore these nodes during future iterations, which encourages exploration. Since determining the nearest nodes might be an expensive operation in terms of execution time, only the ant that finds the best solution is considered a flying ant in each iteration. The rest of the ants are dealt with in the usual way.
The flying ant then determines the nearest neighboring nodes (web services) by using Equation (5) to calculate the distance between two web services x and y. Equation (5) considers two web services to be similar (close to each other) if they have similar QoS properties, where C, RT, A, and R are the cost, response time, availability, and reliability, respectively; x is the best web service obtained by the best ant in task t i+1; y is one of the neighboring web services to x in task t i+1; and x ≠ y. The nearest neighbors receive an amount of pheromone that is inversely proportional to their distance from the best node. Equation (6) describes how much pheromone a node receives as a function of the pheromone trail from the global pheromone update and the normalized distance d η (i,x)(i,l) between the web service x in task i + 1 and its neighboring web service l in the same task. The distance (local heuristic) η ij is normalized according to Equation (7). Figure 3 illustrates the process of the FACO algorithm [33], which was added to the process of ACO.
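The extracted text preserves only the intent of Equations (5)-(7), not their exact forms, so the sketch below makes two explicit assumptions: Equation (5) is taken as a Euclidean distance over the four QoS properties, and Equations (6)-(7) as a deposit that decays with the normalized distance. Both are illustrative stand-ins, not the published formulas, and all names are ours.

```python
import math

def qos_distance(x, y):
    """Assumed form of Equation (5): Euclidean distance over the four
    QoS properties of two candidate web services (given as dicts)."""
    keys = ("cost", "response_time", "availability", "reliability")
    return math.sqrt(sum((x[k] - y[k]) ** 2 for k in keys))

def neighbor_deposits(best_service, neighbors, deposited):
    """Assumed form of Equations (6)-(7): normalize distances to [0, 1]
    and give each neighbor a pheromone share that shrinks as its
    distance from the best node grows."""
    dists = [qos_distance(best_service, nb) for nb in neighbors]
    max_d = max(dists) if dists and max(dists) > 0 else 1.0
    return [deposited / (1.0 + d / max_d) for d in dists]
```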
Dynamic Flying Ant Colony Optimization (DFACO) Algorithm Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them cannot avoid the stagnation problem, or they may obtain good solutions but take a long execution time [7]. In this work, we propose an enhanced ACO algorithm that finds better solutions in less computation time and includes a robustness mechanism to avoid the stagnation problem. In this section, we present a modification of the FACO algorithm that makes it suitable for the TSP in the following ways. First, the number of neighboring nodes in FACO was static, determined empirically. In DFACO, by contrast, the number of neighbors that may be injected with pheromones is dynamic: it varies in each iteration depending on the quality of the best solution reached so far compared to the other solutions. The number of neighbors is determined based on two cases: (1) if the best solution is only slightly better than the other solutions, the number of neighbors should be large, which encourages exploration in future iterations; (2) if the best solution is considerably better than the other solutions, the number of neighbors should be small, which encourages exploitation in future iterations. The number of nearest neighbors, NS, was determined according to Equation (8), where L gb (t) is the tour length of the global best tour at time t, L k (t) is the tour length of ant k at time t, S is the number of ants, and N is the number of cities; a sketch of one plausible reading of this rule follows this section. The second modification aims to reduce the execution time. If all ants were flying ants (as in FACO), we would have to determine many neighbors for each node on the best path, which may substantially increase the execution time. Therefore, here, only 50% of the ants are flying ants, and the rest are normal walking ants. The third modification aims to encourage exploration at early stages and exploitation at later stages. This modification is similar to the FACO algorithm, but we adapted the process to the TSP. The intuition behind this is that at early stages, we have no idea about the location of the best solution in the search space, and therefore ants should be encouraged to explore it. At later iterations, the region that contains the best solution is more likely to have been located, and therefore exploitation should be encouraged. This was achieved by injecting pheromones at farther neighbors during early iterations, while at later iterations, pheromones were injected at only the nearest neighbors. Equation (6) was used to determine the amount of pheromone for each neighbor. As a final modification, we embedded the 3-Opt algorithm in FACO to reduce the chances of getting stuck at a local minimum. Figure 4 shows the complete algorithm in detail, and Figure 5 illustrates the process of the DFACO algorithm.
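Equation (8) itself did not survive extraction; only its variables (L gb (t), the per-ant tour lengths, S, and N) and its intended behavior are described above. The following Python sketch is therefore one hypothetical reading, not the paper's exact formula: it scales the city count by the ratio of the global-best tour length to the colony's mean tour length, so NS is large when the best ant is only slightly ahead and small when it is far ahead.

```python
def dynamic_neighbor_count(best_len, tour_lens, n_cities):
    """Hypothetical reading of Equation (8): NS grows toward the number
    of cities N when the global-best tour is barely shorter than the
    colony's average tour (exploration) and shrinks when it is much
    shorter (exploitation)."""
    mean_len = sum(tour_lens) / len(tour_lens)  # average over the S ants
    ns = round(n_cities * best_len / mean_len)
    return max(1, min(ns, n_cities - 1))
```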
Experimental Results We conducted empirical experiments using TSP datasets from TSPLIB [34] to test the performance of the proposed algorithm. We performed three comparisons. First, we compared DFACO combined with the 3-Opt algorithm with ACO combined with the 3-Opt algorithm using 24 datasets. Second, we compared DFACO with PACO-3Opt [7] in detail using 21 datasets. Third, we compared DFACO with five more recent methods [8,9,18,20,21] using 24 datasets. We compared the methods with respect to the average of the best solutions over all runs (Mean), the standard deviation (SD), and the best solution of each run (Best). We also compared DFACO, PACO-3Opt, and ACO with respect to execution time in seconds. Table 1 lists the parameter values of both algorithms, which were determined empirically. These values were also used to compare DFACO to all of the other methods. Both ACO and DFACO were run for 100 iterations (Z), and each dataset was used in 30 independent experiments. Table 2 lists the comparison results for DFACO and ACO. The first column shows the name of the TSP dataset. The second column shows the best-known solution (BKS) as reported on the TSPLIB website (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/STSP.html). Bold font indicates the best results. Table 2 indicates that DFACO was able to find the BKS in all runs for 16 datasets with zero SD, while ACO was able to find the BKS in all runs for 15 datasets with zero SD. However, the proposed method obtained the BKS in all runs faster than ACO for four datasets (bier127, ch130, kroB150, and kroA200), while ACO obtained the BKS faster than DFACO for two datasets (eil101 and ch150). Figure 6 shows the average of the best solutions, while Figure 7 presents the execution time in seconds for all datasets in the table. As can be seen in Figure 6, DFACO was better than ACO in terms of the average of the best solutions and obtained a shorter distance on average for all datasets. Also, DFACO was faster than ACO by 54.7 s over all datasets. For the remaining eight datasets, DFACO found the best Mean solution for seven datasets, while ACO found the best Mean solution for one dataset (fl1400). These results were found to be statistically significant according to the Wilcoxon signed-rank test, with N = 8 and p ≤ 0.05. A t-test was used to check whether the results were statistically significant over the 30 independent runs for each of the eight datasets. The results indicate that the proposed method's results were statistically significant for four out of eight datasets, namely rat575, rat783, rl1323, and d1655. We did not perform a t-test for the remaining 16 datasets because both algorithms achieved the BKS.
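The two significance tests can be reproduced with SciPy; the sketch below shows the calls that correspond to the analysis described above. The paper does not state whether the per-dataset t-test was paired or independent, so an independent two-sample test is assumed here, and all variable names are ours.

```python
from scipy import stats

def wilcoxon_over_datasets(dfaco_means, aco_means, alpha=0.05):
    """Paired Wilcoxon signed-rank test over the per-dataset Mean
    values of the two algorithms (N = 8 paired datasets in the paper)."""
    stat, p = stats.wilcoxon(dfaco_means, aco_means)
    return stat, p, p <= alpha

def ttest_per_dataset(dfaco_runs, aco_runs, alpha=0.05):
    """Two-sample t-test over the 30 independent run results of one
    dataset, as applied to rat575, rat783, rl1323, and d1655."""
    stat, p = stats.ttest_ind(dfaco_runs, aco_runs)
    return stat, p, p <= alpha
```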
For the second set of comparisons, we followed the comparison method of PACO-3Opt [7]. In the PACO-3Opt experiments, the TSP datasets were divided based on problem size into small-scale and large-scale datasets (the size of the large-scale datasets was between 400 and 600 cities). Ten TSP datasets were used for the small scale (shown in Table 3), and 11 TSP datasets were used for the large scale (shown in Table 4). Table 3 presents the experimental results of comparing DFACO with PACO-3Opt for small-scale TSP instances. The table shows the best, worst, and average values of both algorithms for comparison. Boldface text indicates the better results. The results reveal that DFACO obtained the optimum distances for all datasets in terms of best, worst, and average values. Meanwhile, PACO-3Opt obtained the optimum distances for only six datasets, one dataset, and one dataset in terms of best, worst, and average values, respectively. With regard to execution time, Table 3 shows that DFACO significantly reduced the execution time and obtained better results faster than PACO-3Opt for all TSP instances. Table 4 presents the results of comparing DFACO with PACO-3Opt for large-scale TSP instances. It also shows the best, worst, and average values of both algorithms for comparison. Boldface text indicates the better results. The results reveal that DFACO obtained better distances for all datasets in terms of best, worst, and average values, except for rat783. With regard to execution time, Table 4 shows that DFACO significantly reduced the execution time and obtained its results faster than PACO-3Opt for all TSP instances; for rat783, DFACO was faster than PACO-3Opt but did not obtain better results. Table 5 shows the third comparison, between DFACO and five recent methods [8,9,18,20,21]. In this set of experiments, we used 24 TSP datasets and 30 independent runs. The table reveals that the proposed algorithm achieved the best results for all datasets except one (rat783), for which Deng's method [9] achieved the best result. Also, DFACO found the BKS for 18 datasets, and for one dataset (rat575), it found an even better solution than the BKS. Figures 8-31 show the average of the best solutions for each TSP instance obtained by the different algorithms. These figures visualize the results of the 24 datasets shown in Table 5. From these figures, it is clear that DFACO's performance was better than that of the other algorithms for all datasets except rat783.
Table 6 compares DFACO with the five recent methods with respect to the percentage deviation of the average solution from the BKS value (PDav) and the percentage deviation of the best solution from the BKS value (PDbest). PDav and PDbest were calculated using Equations (9) and (10), respectively: $PDav = \frac{Mean - BKS}{BKS} \times 100$ (9) and $PDbest = \frac{Best - BKS}{BKS} \times 100$ (10). The results revealed that the values of PDav and PDbest for DFACO were better than those for the other methods on all datasets except one (rat783), for which Deng's method [9] was better. Conclusions This paper proposed a modified FACO algorithm for the TSP. We modified FACO in several ways to reduce the execution time and to better balance exploration and exploitation. For example, the number of neighboring nodes receiving pheromones varied depending on the quality of the solution compared to the other solutions. This helped to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contained a combination of normal and flying ants. These modifications aimed to achieve better solutions with less processing time and to avoid getting stuck in local minima. DFACO was compared with (1) ACO and five recent methods for the TSP [8,9,18,20,21] using 24 TSP datasets and (2) PACO-3Opt using 21 TSP datasets. Our empirical results showed that DFACO achieved the best results compared to ACO and the five different methods for most datasets (23 out of 24) in terms of solution quality. Also, DFACO achieved better results than PACO-3Opt for most datasets (20 out of 21) in terms of solution quality and execution time. Furthermore, for one dataset, DFACO achieved a better solution than the best-known solution.
Is “Esterhazy II”, an Old Walnut Variety in the Hungarian Gene Bank, the Original Genotype? The old walnut (Juglans regia L.) genotype called "Esterhazy II" was well-known in the Austro-Hungarian Monarchy before World War II, and it can still be found in Austrian, German, and Swiss backyard gardens today. Unfortunately, vegetatively propagated progenies of the original "Esterhazy II" are no longer available anywhere in the world, because walnut grafting started only after this genotype had become well-known. Although various accessions with "Esterhazy II" "blood" are available, it is difficult to determine which one can be considered true to type or the most similar to the original. In this paper, the phenological and nut morphological characteristics of an "Esterhazy II" specimen planted in a Hungarian gene bank were compared to the varieties "Milotai 10" and "Chandler". The examined characteristics were budbreak, blossom time, type of dichogamy, ripening time, and nut and kernel features. Additional SSR fingerprinting was used to identify identical genotypes and to demonstrate the relatedness of the analyzed "Esterhazy II" genotype to the other Hungarian walnut cultivars. It can be concluded that under the name "Esterhazy II", several different genotypes can be observed. All the checked characteristics except budbreak fitted well with the previous descriptions. Our results confirmed that the examined "Esterhazy II" genotype shows high similarity to the "original" "Esterhazy II" described in the literature. Introduction The basis of the Hungarian walnut assortment is provided by the selected varieties derived from the Carpathian race [1,2]; these selections are thought to be native/indigenous [3]. These genotypes have unique characteristics [4], such as early ripening time and large fruit size [5]. Today, the most popular Hungarian-bred walnut cultivar is the terminal-bearing "Milotai 10" [6]. In the past, some other, old walnut genotypes were the standards, such as "Sebeshelyi gömbölyű", "Sebeshelyi hosszú", and "Nagybányai" from the territory of Transylvania (Romania) [7], "Milotai" and "Tarpai" from the Hungarian territory [3], and "Franquette" from France [8]. Unfortunately, old cultivars are missing from the contemporary breeding programs of many countries; however, there are also some exceptions, such as China [9,10], Hungary [2], Iran [11], Kazakhstan [12], New Zealand [13], Romania [14,15], Serbia [16], Slovenia [17], and the United States of America [18]. Apart from the previously mentioned old walnut genotypes, "Esterhazy II" (syn: Eszterházy II, Eszterházi 2, Estherházai II, E II) still has importance. This genotype was well-known in the territory of the former Austro-Hungarian Monarchy, and it is still popular nowadays, especially in the German-speaking countries. Numerous German, Austrian, and Swiss nurseries sell their "own" genotypes labeled as "Esterhazy II". However, the very original "Esterhazy II", grown from the beginning of the 19th century, is possibly not available anymore, since vegetative walnut propagation started (in Hungary, for example, in the 1970s) only after this genotype had become flourishing and widespread. Even if "selected" "Esterhazy II" genotypes can be found in some core collections worldwide, they probably have a seed origin and may therefore differ from each other.
This paper aims to describe the phenological and fruit morphological characteristics of one "Esterhazy II" specimen, planted in the core collection of the Hungarian University of Agriculture and Life Sciences (HUALS) Research Institute for Fruit Growing and Ornamentals, and to compare it with the two most widespread varieties, "Milotai 10" and "Chandler". The genotype selected for the study was derived from the initial growing place of the "Esterhazy II" variety, the gene bank in Fertőd; thus, this genotype was considered a putatively original type. To assess the identity of the various specimens of "Esterhazy II" and their relationship to the most important walnut varieties in Hungary, a genetic fingerprint analysis was conducted as well. Finally, one of the most important goals of the study is to rediscover and restore this historical variety for the future. Phenology The variety showing the earliest budbreak in the described trial was "Milotai 10", followed by "Chandler" and "Esterhazy II". There was a statistical difference at the beginning of budbreak among all three observed varieties, but concerning the end of budbreak, a significant difference was found only in the case of "Esterhazy II" (Table 1). The first male catkins of "Chandler" and the first female flowers of "Milotai 10" started to open the earliest, on the same day in the blooming period. "Esterhazy II" was the latest regarding both the first female and first male blooms. "Esterhazy II" and "Chandler" had protandrous dichogamy, but "Milotai 10" had the opposite, protogynous flowering. The beginning and the end of the flowering period, as well as the length of the first male bloom of "Esterhazy II", showed significant differences compared to the standards. However, while the beginning and end of the pistillate opening period of "Esterhazy II" were statistically different from "Milotai 10", there was no statistical difference between "Chandler" and "Esterhazy II". Regarding the length of the first female bloom, only "Chandler" differed statistically (Tables 2 and 3, Supplementary Information, Figure S1). The harvest time of "Esterhazy II" was between those of "Milotai 10" and "Chandler", at the beginning of the third week of September (Table 4). Nut Characteristics Nut size is an important feature in the descriptions of walnut varieties and genotypes. The nut heights of "Esterhazy II" and "Chandler" were equivalent and differed significantly from "Milotai 10". The most important characteristic is the nut diameter, for which "Esterhazy II" reached the significantly largest value among the observed varieties. Again, in the case of nut width, "Esterhazy II" produced the significantly largest value among the examined cultivars (Figure 1). Varieties can also be distinguished and determined based on the fruit shape. The roundness indices of the observed varieties were very similar, but still, "Esterhazy II" and "Milotai 10" showed significant differences compared to "Chandler". Shell thickness values varied between 1.56 and 1.70 mm, where "Esterhazy II" produced the thickest shells; by contrast, "Milotai 10" and "Chandler" had thinner shells, and their values differed statistically (Figure 1, Supplementary Information, Figures S2 and S3). One main aim of walnut breeding is to obtain higher nut production; in other words, many nuts with good nut weight. In our experiment, the dried whole nut weights varied between 11.9 and 14.5 g/nut. The kernel weight was in the range of 5.5 to 6.7 g/kernel.
The highest dried nut and kernel weights were measured in the case of "Esterhazy II"; both parameters were significantly higher than the measured values of the other two varieties (Figure 2). Finally, kernel characteristics determine the successful production of a variety. The ratio of kernel weight to whole fruit weight was in the range of 45.0% to 46.1% for all three varieties. The rate of halves showed larger differences (between 73.2% and 86.7%), however. "Chandler" reached the highest value in the category of cracking rate, followed by "Esterhazy II" and "Milotai 10". There was no significant difference between the varieties examined in this trial regarding this parameter (Figure 3). Shell and kernel color, especially lightness, is an important feature from the aspect of market value. The L values of "Chandler" and "Milotai 10" were significantly higher compared to "Esterhazy II". The "a" values of "Milotai 10" were the highest and showed a significant difference from "Esterhazy II". The "b" values of "Chandler" were statistically different from "Esterhazy II" (Figure 4; in the figure, the "L" value signals lightness from black (0) to white (100), "a" runs from green (−) to red (+), and "b" from blue (−) to yellow (+); SD5%: L = 1.4, a = 0.6, b = 1.5; varieties not significantly different from each other at SD5% are indicated with the same letter). Genetic Analysis Additionally, a genetic fingerprint analysis was conducted, since several different specimens are available as possible "Esterhazy II" genotypes besides the one analyzed in this study; however, the genetic uniqueness of these gene bank accessions had never been checked. Our genetic analysis resulted in distinct, unique genotypes for all the selected samples with only one exception, as initially suspected. The two "Esterhazy II" specimens from Fertőd, propagated by grafting, are genetically identical. Otherwise, under the name "Esterhazy", several different genotypes can be observed. In general, it can be concluded that the method, with the eight applied SSR markers, was appropriate for genetic fingerprinting, as the P ID value (in other words, the probability of random matching) was very low (1.6 × 10⁻⁵). All of the eight analyzed SSR markers proved to be polymorphic. The main genetic indices are presented in Table S1. The UPGMA dendrogram constructed from the genetic distance matrix of 18 walnut varieties demonstrates the separation of the analyzed E II (3) specimen from the other genotypes (Figure 5). The trees indicated as E II (1), (2), and (3) in the gene bank have a closer relationship with "Alsószentiváni 117" lineages, while the E II and E I trees from Fertőd form a distinct group together with the Hungarian varieties ("Tiszacsécsi 83" and "BD6") originating from near the Tisza river.
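The text reports only the combined eight-locus P ID value; for orientation, the quantity GenAlEx computes here is, under the usual assumptions of Hardy-Weinberg equilibrium and independent loci, a per-locus matching probability multiplied across loci. A commonly used form is

$$ P_{ID} \;=\; \prod_{\text{loci}} \left[\, 2\Big(\sum_{i} p_i^{2}\Big)^{2} - \sum_{i} p_i^{4} \,\right], $$

where $p_i$ is the frequency of the i-th allele at a locus; a value of $1.6 \times 10^{-5}$ thus corresponds to roughly a 1-in-62,500 chance that two unrelated individuals share the same multilocus genotype.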
Discussion According to the variety descriptions, "Esterhazy II" should have an early budbreak time [19][20][21][22][23][24]. However, former phenology descriptions are not always exact, due to the lack of information about the varieties they were compared with. Nevertheless, it was shown that the "Esterhazy II" genotype planted in our core collection has a later budbreak than "Chandler". There was a significant difference regarding the beginning of budbreak among all three observed varieties (Table 1), while in the case of the other phenology characters, namely the end and length of blooming, the three examined varieties were not statistically different. Hence, in respect of the late budbreak feature of the examined genotype, there is a discrepancy between our observations and the literature data. The descriptions also mention that "Esterhazy II" has a protandrous flowering system with long homogamy. As our genotype fits well with these characteristics (Figure S1, Tables 2 and 3), it can be stated that the genotype may be "Esterhazy II". Furthermore, the harvest time of "Esterhazy II" (Table 4) also matches the data of the former descriptions [19][20][21][22][23][24]. According to the literature, nuts of "Esterhazy II" have an ovate ("egg") shape with a medium-long tip [19][20][21][22][23][24]. Measurements made in this trial are consistent with this information, as our genotype provided the highest values of nut height and nut diameter (Figures 1 and S2). Nuts from our samples reached the nut height and diameter described in the literature [19][20][21][22][23][24]. Thus, the examined genotype from our gene bank may be true to "Esterhazy II" also based on the fruit size data. The average dried nut weight of "Esterhazy II" from our gene collection varied from 12.7 to 16.3 g/nut. Only one reference [24] mentioned a hint regarding this value, stating that one kg of dried in-shell nuts contained on average 84 nuts. Based on our measurements, one kg of dried nuts from the trial would contain 62 to 79 nuts. Even if the calculated number of dried nuts per kg did not reach the described value, the "Esterhazy II" genotype from our gene bank produced the significantly highest dried nut and kernel weights (Figure 2). The average kernel rate of the analyzed "Esterhazy II" specimen reached 45.5% (Figure 3), which is a bit lower than the value of 49% mentioned elsewhere [20][21][22][23][24]. Our "Esterhazy II" holds another positive characteristic, easy kernel removal, described in the literature [20][21][22][23][24] as well, which is reflected in the measured high cracking rate (Figure 3). Kernels of our "Esterhazy II" had a light color, and they differed significantly from "Milotai 10" and "Chandler" in this respect as well (Figure 4). Furthermore, "Esterhazy II" from our collection had an outstanding flavor without a bitter aftertaste, due to the absence of tannin compounds in the kernel tissues [25]. The results of the genetic analysis confirm the previous presumption that the original "Esterhazy II" genotype has various descendants that should be considered different genotypes in the future; it would even be worth considering revising the nomenclature of this variety series. On the other hand, the available "Esterhazy" specimens can serve as a rediscovered gene pool for breeding. For instance, the "Esterhazy II" genotype planted in our core collection and analyzed here had previously been selected for a trial due to its remarkable characteristics. Even if the aim of the genetic analysis presented here was only to check the identity of the various "Esterhazy" specimens, a cautious inference can still be made concerning the origin of the lineage. It seems rather probable that the variety holds a considerable part of its gene pool from local ancestors. Therefore, it may stand close to the Carpathian race.
However, to reveal the roots of this old walnut cultivar (whether it is of French origin, as previously stated in the literature [26], or rather of a local, Carpathian lineage), a more detailed genetic analysis, including several reference genotypes from both populations, should be accomplished in a future study. Materials and Methods Besides "Esterhazy II", two other varieties were included in the study. "Milotai 10" is important in the Central European countries, while "Chandler" is among the most widely grown varieties of the world. By selecting these two varieties, a more comprehensive description can be made regarding several phenological, morphological, and horticultural traits of "Esterhazy II". "Esterhazy II" Genotype The "Esterhazy II" genotype was selected at the beginning of the 19th century from a horticultural plantation on the former Esterhazy estate (today called Fertőd), located in Northwest Hungary near the Hungarian-Austrian border, and possibly derived from a French origin [26]. The "Esterhazy II" specimen planted in the core collection of the HUALS Research Institute for Fruit Growing was derived directly from Fertőd. The genotype was previously considered to be "Esterhazy II"; therefore, our null hypothesis is that this genotype represents the type. Henceforth, this genotype is referred to as "Esterhazy II" in this paper. "Esterhazy II" has an early budbreak time; therefore, it is sensitive to late spring frosts. It has protandrous flowering, with long homogamy during its blooming period. The first female bloom date is in early May. The yield of "Esterhazy II" is good or very good in Germany (Internet 4) but poor under Hungarian climatic conditions, as it is prone to apomixis. Its harvest time is in late September to early October, with a solitary setting type. The nuts are ovate with a medium-long tip, large (37 to 45 mm), and approx. 37 mm in width. Its shell is relatively thin and well-closed, with a smooth or slightly grooved surface. Usually, there are 84 nuts in a kilogram of dried in-shell fruits. It is easy to crack, with a kernel rate of approx. 49% and a unique, so-called ivory white kernel color. Kernel removal is easy, and it has one of the best flavors. The trees have medium vigor with wide canopies. This genotype is susceptible to Xanthomonas arboricola pv. juglandis [19][20][21][22][23][24] (Figure 6). To check the identity of the variety, all the available specimens under the name "Esterhazy" were sampled for the genetic identification procedure. For this purpose, three additional old trees were sampled from the Esterhazy estate, Fertőd (one "Esterhazy I" and two grafted "Esterhazy II" trees), and three other grafted "Esterhazy II" individuals from the HUALS core collection were added as well. Furthermore, attempts were made to find any existing herbarium samples of the original "Esterhazy II", but without success. Milotai 10 "Milotai 10" has an early budbreak, around late March to early April. Female flowers of this protogynous variety open in the third decade of April; its male flowers shed their pollen in late April to early May. Nuts are ready for harvest around 20 September. "Milotai 10" produces nuts of excellent quality, with a large fruit size, smooth shell surface, and light shell and kernel color [5,27] (Figure 6). Chandler "Chandler" is the most widespread hybrid walnut cultivar in the world. In Hungary, its budbreak is medium-early, and it is mostly a lateral bearer, although this is not typical for young trees. Nuts ripen in the last week of September.
The dried fruits are of high quality, with an average in-shell nut weight of 13 g and a diameter of 28-30 mm; they are smooth-surfaced and have a light shell and kernel color. In a well-pruned and irrigated orchard, 90% of the nuts can reach 32 mm in diameter. The kernel rate is 49%. The tree is moderately vigorous and partly upright in habit, but highly susceptible to Xanthomonas disease [28][29][30] (Figure 6). Description of the Trial's Site Conditions The trial (47°20′11″ N, 18°51′53″ E, 127 m above sea level) was planted in the spring of 1990 at the experimental field of the HUALS Research Institute for Fruit Growing, on chernozem soil with high lime content (pH = 8, total lime content in the top 60 cm layer 5%) and high humus content (2.3-2.5%). Considering the Arany-type cohesion index [31], the value K A = 40 refers to medium compactness. All observed trees were grafted on selected Juglans regia seedling rootstocks. Three grafted trees were planted in a block at 10 × 10 m spacing. The orchard was not irrigated. During the data collection period, the average annual temperature was 11.5 °C, while the average temperature during the growing season (between March and September) was 18.4 °C. The average minimum temperature during the spring months was 5.3 °C, and the number of days with frost during spring (between March and May) was 5.4. The average annual precipitation was 580.6 mm, and the sunshine hours were 2079 h/year. Phenological Observations and Nut Morphology To collect data concerning budbreak and blossom, the Ctifl scheme [32,33] was used. The starting point (calendar day counted continuously from 1 January every year) for budbreak was the stage "Cf" (when the terminal buds reached 2.5 cm in length), for the female flowers the stage "Ff2" (when the stigmas started to open), and for the male flowers the stage "Em" (when the pollen started to shed). The end of the blossom was marked for the male flowers when they dried and dropped off the trees, while for the female flowers, the drying out of the feathers marked the end of the blossom. All data were recorded every second or third day in the mornings between 8 and 11 am. The harvest time was marked when 50% of the husks were open. A sample of 30 nuts per variety was collected at harvest time each year between 2010 and 2019, and the nut characteristics (nut height, nut diameter, nut width, shell thickness, dried nut weight, kernel weight) were measured. The roundness index (nut diameter/nut height), kernel rate (kernel weight/nut weight), and cracking rate (weight of halves/whole kernel weight) were calculated; a small sketch of these computations is given at the end of this section. Shell and kernel color was measured using a Konica Minolta CR-400 chroma meter (Konica Minolta, Japan), expressing the color with the following three values: the "L" value is for lightness from black (0) to white (100), "a" runs from green (−) to red (+), and "b" from blue (−) to yellow (+). Color measurements were made in the last three years, in parallel with the nut examinations. For statistical evaluation, Statgraphics X64 software was used. The letters a, b, and c indicate significantly different groups at SD5%, while varieties not significantly different from each other at SD5% are indicated with the same letter.
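The three derived indices mentioned above are simple ratios of the measured quantities; the following minimal Python sketch makes that explicit (the values in the example call are hypothetical, not measured data, and all names are ours).

```python
def nut_indices(nut_height, nut_diameter, nut_weight, kernel_weight,
                halves_weight):
    """Derived nut indices as defined in the Methods (units: mm, g)."""
    return {
        "roundness_index": nut_diameter / nut_height,
        "kernel_rate_pct": 100 * kernel_weight / nut_weight,
        "cracking_rate_pct": 100 * halves_weight / kernel_weight,
    }

# Example with plausible values for one nut (illustrative only):
print(nut_indices(40.0, 37.0, 13.0, 6.0, 5.2))
```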
Five "Esterhazy II" samples were included in the analysis indicated as E II (1), E II (2), E II (3) from the HUALS gene bank, E II (Fertőd 1) and E II (Fertőd 2) from Fertőd, supplemented with one additional "Esterhazy I" tree indicated as E I (Fertőd). Sample E II (3) represents the genotype that was the object of the phenology and morphology observations in the field trial. For SSR data analysis, GenAlEx 6.5 software [39,40] was used. The probability of identity (P ID ) was calculated for combining the analyzed eight loci by GenAlEx 6.5., possible matching genotypes were checked. The genetic distance matrix was generated by GenAlEx 6.5 for the construction of an UPGMA dendrogram by PAST 4.03 [41] with 9999 bootstrap replicates. Conclusions The old Hungarian walnut genotype labeled as "Esterhazy II" in the HUALS gene bank was examined from phenological and genetical points of view during 2010 and 2019. It can be concluded that under the name "Esterhazy II", several different genotypes can be observed. Our results confirmed that the examined "Esterhazy II" genotype shows high similarity to the "original" "Esterhazy II" described formerly, since all the checked characteristics fitted well with the literature data, with the only exception of the late budbreak. Nevertheless, only the high similarity can be confirmed since the original "Esterhazy II" is highly probable to have faded away due to the generative propagation used until the 1970s. It was also uncovered that the investigated "Esterhazy II" has a unique genotype. Therefore, it can be considered to be introduced as a new variety from the Esterhazy series. In general, the rediscovery and restoration of this historical variety would provide new candidates for walnut breeding.
Microbiota metabolized Bile Acids accelerate Gastroesophageal Adenocarcinoma via FXR inhibition Background: The incidence of Barrett esophagus (BE) and gastroesophageal adenocarcinoma (GEAC) correlates with obesity and a diet rich in fat. Bile acids (BA) support fat digestion and undergo microbial metabolization in the gut. The farnesoid X receptor (FXR) is an important modulator of BA homeostasis. Its capacity to inhibit cancer-related processes when activated makes FXR an appealing therapeutic target. In this work, we assess the role of diet on the microbiota-BA axis and evaluate the role of FXR in disease progression. Results: Here we show that a high fat diet (HFD) accelerated tumorigenesis in L2-IL1B mice (a mouse model of BE and GEAC) while increasing BA levels and enriching for gut microbiota that convert primary to secondary BA. While upregulated in BE, expression of FXR was downregulated in GEAC in mice and humans. In L2-IL1B mice, FXR knockout enhanced the dysplastic phenotype and increased Lgr5 progenitor cell numbers. Treatment of murine organoids and L2-IL1B mice with the FXR agonist obeticholic acid (OCA) decelerated GEAC progression. Conclusion: We provide a novel concept of GEAC carcinogenesis being accelerated via the diet-microbiome-metabolome axis and FXR inhibition on progenitor cells. Further, FXR activation with OCA ameliorated the phenotype in vitro and in vivo, suggesting that FXR agonists have potential for differentiation therapy in GEAC prevention. In Situ Hybridization (ISH) ISH was performed to detect antigen expression at the mRNA level for Lgr5 and FXR. The RNAscope 2.5 HD assay - Detection Reagent BROWN (ACD) and all related reagents from ACD were used. The procedure was performed according to the manufacturer's protocol using the Mm-Lgr5 target probe (ACD, Cat. No. 312171), the Hs-Lgr5 target probe (ACD, Cat. No. 311021), and the Mm-NR1H4 target probe (ACD, Cat. No. 484491) for detection of FXR expression. Quantification was assessed as the percentage of positive cells in the BE region, as for IHC. RNA extraction and reverse transcription (RT) Tissue for RNA isolation was collected and stored overnight in 250 µl RNAlater™ (AM-7020; Invitrogen) at 4 °C before long-term storage at −80 °C. Isolation was performed using the RNeasy Mini Kit (74104; Qiagen) according to the manufacturer's instructions. For tissue homogenization, a SilentCrusher M (Heidolph) was employed. RNA was eluted in 20 µl PCR-grade water. RNA concentration and quality were measured on a NanoDrop 2000 spectrophotometer (Thermo Scientific). RNA was directly subjected to reverse transcription (RT) or stored at −80 °C until further use in downstream applications. For RNA extraction from stool, the Quick-RNA Fecal/Soil Microbe Microprep Kit (R2040, Zymo Research) was used according to the manufacturer's instructions. Reverse transcription was conducted with the QuantiTect Reverse Transcription Kit (205314, Qiagen) according to the manufacturer's instructions.
Quantitative Real Time PCR (qRT-PCR) Target gene expression levels were evaluated by qRT-PCR on a LightCycler® 480 (Roche). PCR reactions were performed in a total volume of 10 µl per reaction using the QuantiFast SYBR Green PCR Kit (4000) (204057, Qiagen). For each reaction, 10-25 ng RNA was applied in a volume of 1-2 µl; reactions were performed in triplicate. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH), cyclophilin A, and beta-actin served as standard housekeeping genes; 16S rRNA and bacterial GAPDH were used as housekeeping genes for the determination of fecal bacterial gene expression. PCR conditions for all reactions were 95 °C for 3 minutes, followed by 40 cycles of amplification. Primer sequences were retrieved from the existing internal primer stock, from papers, from collaborators, or designed via Primer-BLAST (NCBI). All primer pairs were re-evaluated for self-complementarity and gene specificity via Primer-BLAST (NCBI). Primer amplification efficiencies were tested by generating a standard curve from PCR reactions of serial dilutions of the cDNA of positive control tissues. Furthermore, the melting curve of the primers was checked for quality control. Primer sequences of the genes of interest and of the housekeeping genes for fecal bacterial gene expression are listed in Table S1. Microarray analysis Total RNA from SCJ and forestomach tissues from 12-month-old L2-IL1B WT and L2-IL1B-FXR KO mice (n=3) was extracted with TRIzol reagent (Invitrogen) according to the manufacturer's protocol. Expression profiling was accomplished using Mouse Gene 2.1 Affymetrix GeneChip® expression arrays. Differential expression was determined using Limma (4) as implemented in oneChannelGUI (5), operating as part of the Bioconductor suite (6) in the R statistical computing environment (7). Estimates of the statistical significance of the overlap between gene sets were obtained using the chi-square test. DNA extraction Samples were homogenized in a bead beater (MP Biomedicals). Then, 15 mg polyvinylpyrrolidone was added, and the suspension was centrifuged at 15,000g at 4 °C for 3 min. The supernatant was collected and again centrifuged at 15,000g at 4 °C for 3 min. Thereafter, 500 µl of clear supernatant was collected, and 5 µl RNase (10 mg/ml) was added to the samples, followed by an incubation step for 20 min at 37 °C with shaking at 700 rpm. Subsequently, DNA was extracted with the NucleoSpin gDNA Clean-up kit following the manufacturer's protocol. DNA from human PAXgene biopsy samples was extracted using the PAXgene® Tissue DNA Kit (PreAnalytiX), following the manufacturer's protocol for purification of genomic DNA from sections of PAXgene-treated, paraffin-embedded tissue. For all extracted DNA samples, a sodium acetate precipitation was performed for purification and concentration of the samples. DNA samples were mixed 1:10 with a sodium acetate solution (3 M). Then, 4 volumes of 100% ethanol were added, mixed thoroughly, and incubated overnight at −20 °C. All samples were centrifuged at top speed for 30 min at 4 °C, and the supernatant was discarded. The pelleted DNA was washed with 500 μl ice-cold 80% ethanol. The samples were centrifuged at top speed for 10 min at 4 °C, and the supernatant was discarded. If needed, the washing step was repeated once. The DNA pellet was air-dried and resuspended in 10 μl nuclease-free dH2O. After precipitation, the concentration and purity of all samples were measured on a NanoDrop™ 1000 spectrophotometer (Thermo Scientific). A test-sequencing experiment showed that the DNA probes had sufficient quantity and quality for further analyses.
The V3/V4 region of 16S rRNA genes was amplified (25 cycles for fecal samples, 15 cycles for tissue biopsies) from 12 ng of metagenomic DNA using the bacteria-specific primers 341F and 785R, following a two-step procedure to limit amplification bias (9). Amplicons were purified using the AMPure XP system (Beckman), pooled in equimolar amounts, and sequenced in paired-end mode (PE275) on a MiSeq system (Illumina, Inc.) following the manufacturer's instructions.

Analysis of 16S rRNA sequencing data
16S rRNA sequencing data were analyzed using IMNGS, a web-based pipeline for processing of 16S rRNA amplicon datasets (10), and Rhea, an R-based pipeline for data analysis and visualization. Beginning with the IMNGS workflow, the sequences were remultiplexed with a Perl script provided by the inventors of IMNGS termed remultiplexor (10). The IMNGS workflow itself is based on the UPARSE pipeline (11). In the IMNGS workflow, pairing, quality filtering and clustering of zero-radius operational taxonomic units (zOTUs) were performed using USEARCH 8.0 (12). All reads were trimmed to the position of the first base with a quality score smaller than three and then paired. The resulting sequences were size-filtered, and sequences with an assembled size <300 or >600 nucleotides were excluded. Paired reads with an expected error greater than three were also excluded. The remaining sequences were trimmed by five nucleotides on each side to avoid guanine-cytosine (GC) bias and non-random base composition. After processing of the remultiplexed data by the IMNGS workflow, a zOTU table with associated sequences and taxonomic information was generated for further analysis. Additional analyses, including normalization, alpha- and beta-diversity, taxonomic abundance and correlation, were performed using the Rhea pipeline created for the R interface RStudio (13)(14)(15).

OTU clustering and correlation analysis of human stool samples
Raw sequencing reads of the 16S V4 region were analyzed using a previously described pipeline. Briefly, an OTU count per sample table was generated using USEARCH v11.0.667 (12), and taxonomies were assigned using the RDP classifier trained with 16S rRNA training set 18 (16). Samples with fewer than 3000 reads were filtered out. For each bacterial family, a Pearson correlation was calculated between its relative abundance and CA, DCA and TUDCA levels across all samples.

Mass spectrometry (MS) for targeted bile acid analysis
Serum samples from the L2-IL1B and L2-IL1B-FXR KO mouse cohort, from CD-, HFD- and HFD+OCA-treated L2-IL1B mice, and from healthy control individuals and patients diagnosed with BE, dysplasia and EAC from the BarrettNET study were used for metabolomic analyses. Also, cecal content and feces from CD-, HFD- and HFD+OCA-treated L2-IL1B mice and stool of patients were submitted for metabolomic analysis. Around 20 mg of cecal/fecal/stool content or tissue was weighed into 2 ml bead beater tubes (CKMix 2 ml, Bertin Technologies, Montigny-le-Bretonneux, France) filled with ceramic beads (1.4 mm and 2.8 mm i.d.). Samples were later normalized to input weight.
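As a minimal sketch of the per-family correlation analysis described above (Pearson correlation between a bacterial family's relative abundance and a bile acid's level across the same samples): the toy data are hypothetical, and the real analysis would loop over all families and the CA, DCA and TUDCA measurements.

import numpy as np
from scipy.stats import pearsonr

def correlate_family_with_bile_acid(rel_abundance, ba_levels):
    """Pearson r and p-value across samples, dropping samples missing either value."""
    x = np.asarray(rel_abundance, dtype=float)
    y = np.asarray(ba_levels, dtype=float)
    keep = ~(np.isnan(x) | np.isnan(y))
    return pearsonr(x[keep], y[keep])

# Hypothetical toy data: relative abundance of one family and DCA levels in 6 stool samples
r, p = correlate_family_with_bile_acid([0.12, 0.30, 0.05, 0.22, 0.40, 0.18],
                                       [1.1, 2.9, 0.4, 2.0, 3.8, 1.5])
print(f"r = {r:.2f}, p = {p:.3g}")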
For mouse samples, 1 ml of methanol-based dehydrocholic acid extraction solvent (c = 1.3 µmol/l) was added as an internal standard for work-up losses. For human samples, 100 mg of stool was extracted with 5 ml methanol-based dehydrocholic acid. Samples were homogenized using a bead beater (Precellys Evolution, Bertin Technologies) supplied with a Cryolys cooling module (Bertin Technologies, cooled with liquid nitrogen; 3 × 20 seconds at 10,000 rpm with 15-second breaks). The suspension was centrifuged (10 min, 8000 rpm, 10 °C) using an Eppendorf Centrifuge 5415R (Eppendorf, Hamburg, Germany). For metabolomic analysis of serum samples, 30 µl of serum was diluted with 270 µl methanolic dehydrocholic acid solution. After centrifugation as described above, the supernatant was used for analysis.

Targeted metabolomic analysis of bile acids was performed according to a method published by Reiter et al. (22). Briefly, 20 µl of isotopically labeled bile acids (ca. 7 µM each) were added to 100 µl of sample extract. Targeted bile acid measurement was done using a QTRAP 5500 triple quadrupole mass spectrometer (Sciex, Darmstadt, Germany) coupled to an ExionLC AD (Sciex, Darmstadt, Germany) ultrahigh performance liquid chromatography system. A multiple reaction monitoring (MRM) method was used for the detection and quantification of the bile acids. An electrospray ion voltage of −4500 V and the following ion source parameters were applied: curtain gas (35 psi), temperature (450 °C), gas 1 (55 psi), gas 2 (65 psi), and entrance potential (−10 V). The MS parameters and LC conditions were optimized using commercially available standards of endogenous bile acids and deuterated bile acids for the simultaneous quantification of the 34 selected analytes. For separation of the analytes, a 100 × 2.1 mm, 100 Å, 1.7 μm Kinetex C18 column (Phenomenex, Aschaffenburg, Germany) was used. Chromatographic separation was performed at a constant flow rate of 0.4 ml/min using a mobile phase consisting of water (eluent A) and acetonitrile/water (95/5, v/v; eluent B), both containing 5 mM ammonium acetate and 0.1% formic acid. The gradient elution started with 25% B for 2 min, increased at 3.5 min to 27% B, in 2 min to 35% B, which was held until 10 min, increased in 1 min to 43% B, held for 1 min, increased in 2 min to 58% B, and held 3 min isocratically at 58% B. The concentration was then increased to 65% B at 17.5 min, to 80% B at 18 min, and at 19 min to 100% B, which was held for 1 min. At 20.5 min, the column was re-equilibrated for 4.5 min at starting conditions. The injection volume for all samples was 1 μl, the column oven temperature was set to 40 °C, and the autosampler was kept at 15 °C. Data acquisition and instrumental control were performed with Analyst 1.7 software (Sciex, Darmstadt, Germany). Targeted metabolomic analyses were reported as BA intensities or, if corrected for input weight and dilution, in nmol/g feces. Targeted serum BA analyses comparing CD with HFD mice were visualized as heatmaps showing primary and secondary BA intensities between samples, as well as a correlation heatmap of BA levels with dysplasia scores and, inversely, with goblet cell ratios, representing the differentiation status. Bile acids detected via targeted BA analysis are listed in Table S2.

Table S2: Bile acids measured in targeted BA metabolomic analysis
5β-Cholic acid-3α-ol-7-one / 7-Ketolithocholic acid (7-KLCA); 3-Dehydrocholic acid (3-DHCA); 5β-Cholic acid-3α-ol-12-one / 12-Ketolithocholic acid (12-KLCA); 5β-Cholic acid-3α-ol-6,7-dione; … acid-7α-ol-3-one (Ca-7ol3one); Deoxycholic acid (DCA); Glycocholic acid (GCA); Glycochenodeoxycholic acid (GCDCA); Glycoursodeoxycholic acid (GUDCA); Glycodeoxycholic acid (GDCA); Isolithocholic acid (ILCA); Hyodeoxycholic acid (HDCA); Taurocholic acid (TCA); Lithocholic acid (LCA); Taurohyodeoxycholic acid (THDCA); Tauro-α-Muricholic acid (T-α-MCA); Taurolithocholic acid (TLCA); Taurochenodeoxycholic acid (TCDCA); Tauro-ω-Muricholic acid (T-ω-MCA); Taurodeoxycholic acid (TDCA); Allocholic acid (ACA); Tauroursodeoxycholic acid (TUDCA); Ursocholic acid (UCA); Ursodeoxycholic acid (UDCA); 5β-Cholic acid-3α-ol-7,12-dione (7,12-…); … acid / Hyocholic acid (γMCA)

Analysis and statistical evaluation of targeted metabolomic analyses
For single BA analyses, the raw data values were first multiplied by 1000 and then log2 transformed. BA with N/A values were not considered. The data of all other BA were transferred to GraphPad for statistical analysis (details are listed below in the statistical part). For further analysis and visualization, including PLS-DA (partial least squares discriminant analysis) and clustered heatmaps, the web-based tool MetaboAnalyst was employed (23).

Organoid culture, maintenance and experiments

Preparation of conditioned medium for organoid maintenance
L-WRN (L-Wnt3A, R-spondin 3 and Noggin) conditioned medium with additional growth factors was used as growth medium for organoid culture. Media were conditioned with L-WRN-producing cells (ATCC® CRL-3276™) according to the manufacturer's protocol. The complete growth medium consisted of Advanced DMEM/F12 supplemented with 10% FBS, 1% penicillin/streptomycin, HEPES buffer (pH 7.8) and GlutaMAX (all Gibco™ via ThermoFisher). For selection, G-418 and Hygromycin B (all Gibco™ via ThermoFisher) were used. For organoid culture, L-WRN conditioned medium was supplemented with additional growth factors, including 1x B27 and N2 (17504044, 17502048, ThermoFisher), 50 ng/ml human EGF (AF-100-15, Peprotech) and 1 mM N-acetyl cysteine (A7250, Sigma Aldrich).

Isolation and maintenance of murine organoids
Mouse organoid culture was performed according to the procedure published by Pastula et al. 2016 with minor adjustments (24). Resected and cleaned cardia tissue was cut into small pieces using surgical scissors and transferred to an Eppendorf tube containing 200-300 μl Accutase® cell detachment solution (A6964, Sigma-Aldrich). The tissue was incubated in Accutase® on a shaker for 15 min for enzymatic tissue digestion. Tissue pieces were transferred to a 50 ml collection tube containing 20 ml of ice-cold dPBS (14190144, Gibco™ via ThermoFisher) supplemented with 2 mM each of EDTA (AM9260G, Invitrogen™ via ThermoFisher) and EGTA (3054.2, Roth). All following steps were performed on ice. The tissue was incubated on a shaker on ice for 45 min for chemical tissue digestion. The supernatant was removed from the sedimented tissue pieces, and the tissue was washed and mechanically disintegrated by pipetting up and down in 10 ml of cold dPBS (14190144, Gibco™ via ThermoFisher) + 10% FBS (10500064, Gibco™ via ThermoFisher). The tissue suspension was passed through a 70 μm cell strainer into a 50 ml tube. The tissue was removed from the strainer by washing with 10 ml of fresh dPBS + 10% FBS. Mechanical disintegration, filtration, and recovery of the tissue from the strainer as described were repeated four times. The final volume of cell suspension was centrifuged at 4 °C for 10 min at 400 × g. The supernatant was removed, and the cells were resuspended in 150-300 μl Matrigel® (354230, Corning®). Then, 50 μl of ice-cold Matrigel® per well were plated in a 24-well plate prewarmed to 37 °C and overlaid with 500 μl of complete growth media per well after solidification. The plate was incubated at 37 °C. Media was changed every 2-3 days, and organoids were passaged every 7-10 days depending on the growth rate. For passaging, media was removed, Matrigel® was disrupted by repetitive pipetting with ice-cold dPBS + 10% FBS buffer, and the suspension from each well was pooled into a 15 ml collection tube. The washing step was repeated. Matrigel® and organoids were gently disrupted by repetitive pipetting of the suspension in the 15 ml collection tube. The tube was then centrifuged at 4 °C for 10 min at 400 × g. Afterwards, the supernatant and Matrigel® debris were carefully removed. The cell pellet was resuspended in fresh Matrigel®, plated into a prewarmed 24-well plate and overlaid with media as previously described. Wells were expanded 1:2-1:4 per passage depending on the growth rate of the organoids.

Isolation and maintenance of human organoids
Organoids were isolated from biopsies taken from BE and GEAC patients. Prior to biopsy collection, explicit informed consent was obtained from all patients in accordance with institutional guidelines and ethical standards (FREEZE-Biobank).

Treatment of organoids with bile acids
Mouse cardia organoids were isolated from L2-IL1B mice. Treatment was started two days after the 3rd-4th passage of the organoids. The organoid treatment was performed for 72 h with OCA, DCA or TβMCA, or with OCA + DCA or OCA + TβMCA, at concentrations of 10 and 100 μM diluted in the medium, respectively. Every 24 h, cell numbers were counted and cultures photographed, and the medium was subsequently changed. After 72 h, 2-3 wells with organoids were pooled, RNA was extracted using the RNeasy Micro Kit (Qiagen, Germany), and conditioned media from every treatment condition were collected and stored. The number of organoids counted every 24 h was evaluated as a percentage of organoid numbers relative to 0 h.

Click-iT EdU flow cytometry assay kit for evaluation of cell proliferation of organoids
Cell proliferation rates in treated organoids were evaluated using the Click-iT™ EdU Alexa Fluor™ 488 Flow Cytometry Assay Kit (C10425, Invitrogen™ via ThermoFisher). Reagents were prepared and cells labeled, fixed and permeabilized according to the manufacturer's protocol with minor modifications. Organoid cells from approximately 6 wells per treatment group were treated with 50 μM EdU in fresh conditioned medium for two hours in an incubator at 37 °C. After two hours, organoids were harvested with 500 μl of ice-cold dPBS + 10% FBS per well, pooled and centrifuged for 10 min at 4 °C and 400 rcf. The cell pellet was then incubated in 1 ml of 0.25% Trypsin-EDTA (25200056, Gibco™ via ThermoFisher) for 15 min on a rotation wheel in an incubator at 37 °C with occasional vortexing to obtain single cells. The reaction was stopped with dPBS + 10% FBS, the cells were again centrifuged, and the supernatant was discarded. After single-cell isolation, cells were washed with 3 ml of 1% BSA (9418, Sigma) in dPBS and centrifuged as described before. The supernatant was discarded, and the pellet was thoroughly resuspended in 100 μl of Click-iT® fixative. Cells were incubated for 15 min at room temperature (RT) on a rotation wheel in the dark. Cells were washed with 3 ml of 1% BSA in dPBS and pelleted by centrifugation as described before. The cell pellet was resuspended in 100 μl of 1X Click-iT® saponin-based permeabilization and wash reagent. The Click-iT® EdU reaction cocktail was prepared according to the manufacturer's protocol. Then, 0.5 ml of freshly prepared reaction cocktail was added to the organoid cells in 1X Click-iT® saponin-based permeabilization and wash reagent, and samples were incubated for 30 min at RT on a rotation wheel in the dark. Cells were again washed with 3 ml of 1% BSA in dPBS and pelleted by centrifugation as above. The supernatant was removed, and the cells were resuspended in 100 μl of 1X Click-iT® saponin-based permeabilization and wash reagent. Cells were stained for DNA content and analyzed on a Gallios flow cytometer (Beckman Coulter), using an excitation laser at 488 nm and measuring emission with a green bandpass emission filter (530/30 nm). Results were analyzed using FlowJo software (TreeStar).

Histological analysis of organoids
For fixation and embedding of organoids for FFPE sections, organoids were grown on cover slips placed into a 24-well plate. Organoids were washed three times with 500 μl of PBS+ (dPBS + 0.9 mM CaCl2 and 0.493 mM MgCl2) per well for one minute before fixation in 500 μl of 4% PFA for 30 min at RT on a shaker. After fixation, the washing steps were repeated. Organoids were transferred to Bio-Net histology cassettes (09-0403, Langenbrinck GmbH), dehydrated, paraffin-embedded and cut into sections using a manual microtome. Staining and IHC were performed as for tissue sections. Quantification was assessed as the number of positive cells compared to all cells per organoid. All organoids from one mouse that had been subjected to the same treatment were counted as technical replicates. Organoids from different mice were defined as biological replicates. Organoids from n=3 mice per condition were used for statistical analysis. To this end, ordinary one-way ANOVA with Tukey's test to correct for multiple comparisons was performed.

Immunofluorescence staining of paraffin-embedded organoid slides
Human organoid samples (3 GEAC, 3 BE) were fixed after 2-4 passages using 4% paraformaldehyde solution (PFA) in phosphate-buffered saline (PBS) with a pH of 7.4 (sc-281692, ChemCruz via Santa Cruz Biotechnology) for 30 minutes at room temperature. Fixed organoids were dehydrated using a graded ethanol series and xylene, and were finally embedded in paraffin blocks using a standard tissue processing protocol. FFPE organoid blocks were cut into 3.0-μm-thick sections using a microtome. Sections were mounted onto positively charged glass slides and dried at room temperature overnight. Slides were immersed in xylene for 15 minutes (2 times) to remove paraffin. Sections were rehydrated through a series of graded ethanol solutions (100%, 95%, and 70%). Slides were subjected to heat-mediated antigen retrieval using a target retrieval solution, pH 9.0 (S2367, Dako) in a pressure cooker for 15 minutes. After cooling, slides were washed with 1x wash buffer (S3006, Dako). Slides were then blocked with 10% goat serum (ENG9010-10, BIOZOL Diagnostica Vertrieb GmbH) and 1:100 Fc block anti-CD16/CD32 (553142, BD Biosciences) in antibody diluent with background-reducing components (S3022, Dako) for 1 hour at room temperature to minimize nonspecific binding. Next, sections were incubated with the primary antibody to FXR (sc-13063, Santa Cruz Biotechnology) in antibody diluent (1:25) with background-reducing components overnight at 4 °C in a humidified chamber. Slides were washed three times for 5 minutes each with 1x wash buffer to remove unbound primary antibodies. Fluorescently labeled secondary antibodies, 1:400 (A11034, Invitrogen), were applied to the sections for 1 hour at room temperature, and the slides were then washed 3 more times. To minimize autofluorescence of the samples, a quenching kit (SP-8500, Vector Laboratories Inc.) was used. As the final step of this kit, slides were mounted using an anti-fade medium with DAPI to preserve fluorescence signals and counterstain nuclei. Per sample, 5 different fields of view (20x magnification) containing organoids were taken. Images were analyzed with CellProfiler 4.2.5. The nuclear signal of FXR was measured in regions of interest defined by the DAPI signal. Nuclei were classified as positive if their mean FXR intensity exceeded that of 75% of all nuclei. The result was expressed as the percentage of positive nuclei relative to the total number of nuclei in the field of view (a numeric sketch of this thresholding rule is given after Table S1 below).

RNA extraction and downstream applications
In brief, 3-6 wells of organoids were harvested with 500 μl of dPBS + 10% FBS per well; cells were pooled and centrifuged for 10 min at 4 °C and 400 rcf. The supernatant was removed, and the pellet was resuspended in 500 µl collagenase/dispase (1 mg/ml) and incubated at 37 °C for 1 hour to enzymatically digest Matrigel® residuals. The reaction was stopped with 500 µl of 5 mM EDTA (AM9260G, Invitrogen™ via ThermoFisher) in dPBS. The tube was filled to 10 ml with dPBS, centrifuged for 10 min at 4 °C and 400 rcf, and the supernatant was removed. The remaining cell pellet was lysed in 600 µl RLT buffer (lysis buffer; 1015762; Qiagen) supplemented with 1% beta-mercaptoethanol (4227.3, Roth). RNA isolation was performed using the RNeasy Mini Kit (74104; Qiagen) according to the manufacturer's protocol. Reverse transcription PCR and qRT-PCR were both conducted as described for tissue samples.

DNA damage ELISA
The OxiSelect Oxidative DNA Damage ELISA, 8-OHdG Quantitation (Cell Biolabs, Inc. STA-320), was performed according to the manufacturer's protocol using conditioned media from the different treatments done in the organoid cultures. Conditioned media from 12-month-old L2-IL1B mouse organoids treated with OCA, DCA, TβMCA, OCA+DCA or OCA+TβMCA and without treatment were analyzed after 48 h and 72 h of treatment, respectively. Data were acquired on a Multiskan FC microplate reader (Thermo Scientific) and analyzed using Microsoft Excel and GraphPad Prism version 8.

Table S1: Primers for quantitative Real Time-PCR
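As a numeric illustration of the CellProfiler-based FXR quantification described above, the sketch below applies the stated rule (a nucleus is positive if its mean FXR intensity exceeds that of 75% of all nuclei). Note that, taken literally per field of view, this rule classifies roughly 25% of nuclei as positive by construction, so the threshold was presumably derived from a pooled reference population; that pooling choice, and all intensity values, are assumptions.

import numpy as np

def percent_fxr_positive(nuclear_mean_intensities, reference_intensities=None):
    """Percentage of nuclei whose mean FXR intensity exceeds the 75th
    percentile of a reference population (per-field if none supplied)."""
    vals = np.asarray(nuclear_mean_intensities, dtype=float)
    ref = vals if reference_intensities is None else np.asarray(reference_intensities, dtype=float)
    threshold = np.percentile(ref, 75)
    return 100.0 * np.mean(vals > threshold)

# Hypothetical per-nucleus mean intensities from one field of view,
# thresholded against a pooled reference from all fields of the experiment.
field = [0.10, 0.12, 0.15, 0.40, 0.42, 0.44, 0.50, 0.90]
pooled = [0.08, 0.09, 0.11, 0.12, 0.13, 0.14, 0.16, 0.20, 0.41, 0.45, 0.52, 0.88]
print(percent_fxr_positive(field, pooled))  # 37.5% positive in this field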
2024-06-17T13:10:20.425Z
2024-06-12T00:00:00.000
{ "year": 2024, "sha1": "4a8675c0fe29d8f9c0b81f7b4c6eabae173dd3b4", "oa_license": "CCBYNCND", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2024/06/12/2024.06.11.598405.full.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4edb1364a89773b5a7f818d033f1d696ed5e1e68", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
3791610
pes2o/s2orc
v3-fos-license
Recruit Fitness as a Predictor of Police Academy Graduation

Abstract

Background: Suboptimal recruit fitness may be a risk factor for poor performance, injury, illness, and lost time during police academy training.

Aims: To assess the probability of successful completion of and graduation from a police academy as a function of recruits' baseline fitness levels at the time of academy entry.

Methods: Retrospective study in which all available records from recruit training courses held (2006–2012) at all Massachusetts municipal police academies were reviewed and analysed. Entry fitness levels were quantified from the following measures, as recorded at the start of each training class: body composition, push-ups, sit-ups, sit-and-reach, and 1.5-mile run time. The primary outcome of interest was the odds of not successfully graduating from an academy. We used generalized linear mixed models in order to fit logistic regression models with random intercepts for assessing the probability of not graduating, based on entry-level fitness. The primary analyses were restricted to recruits with complete entry-level fitness data.

Results: The fitness measures most strongly associated with academy failure were a lower number of push-ups completed (odds ratio [OR] = 5.2, 95% confidence interval [CI] 2.3–11.7, for 20 versus 41–60 push-ups) and slower run times (OR = 3.8, 95% CI 1.8–7.8, for a 1.5-mile run time of ≥15′20″ versus 12′33″ to 10′37″).

Conclusions: Baseline push-ups and 1.5-mile run time showed the best ability to predict successful academy graduation, especially when considered together. Future research should include prospective validation of entry-level fitness as a predictor of subsequent police academy success.

Introduction

Policing is a dangerous occupation that is both physically and psychologically demanding [1][2][3][4][5][6]. Stressors in law enforcement include shift work; frequent overtime; high job demands in the context of low decisional control; and frequent confrontational interactions [7,8]. Additionally, specific duties such as suspect pursuits and physical altercations with suspects or detainees may require sudden, high levels of physical exertion [3,9,10]. We recently documented that sudden cardiac deaths (SCD) account for up to 10% of all on-duty deaths during police activities and that SCD events are much more likely to occur during stressful duties, especially physical altercations with and pursuits of suspects [8]. Therefore, there are many reasons that police officers and candidate recruits joining the law enforcement profession should be fit. The state of Massachusetts mandates that all recruit police officers pass a state-regulated medical examination and then a job-specific physical ability test (PAT) prior to potential entrance into one of the Commonwealth's police academies [11]. The PAT was designed to replicate certain functions and capabilities of police work in an effort to test a participant's ability to safely perform essential police duties. However, based on concerns of inclusiveness and non-discrimination, the medical exam does not have an obesity or body composition standard. Likewise, the PAT requires only modest levels of fitness, because it is designed to assess minimum capabilities rather than select the most competitive candidates. In the context of the current obesity epidemic, we previously found that as many as a third of public safety candidates in Massachusetts were obese [12]. The present study was initiated by the Massachusetts Municipal Police Training Committee (MMPTC).
The MMPTC leadership's anecdotal experience is that many of their student recruits are ill-prepared for the rigours of 20 weeks of police academy training. Such recruits seem particularly likely to drop out of academies early on or otherwise fail to complete their training. There are strong financial incentives to minimize the number of unsuccessful candidates. Each recruit's training costs the Commonwealth of Massachusetts $5000 on average. In addition, the hiring jurisdiction loses tens of thousands of dollars invested during the hiring process. Therefore, if fitness predicts academy success, target physical fitness standards would be desirable and could inform potential police recruits on how to better prepare themselves for a training academy. Accordingly, our study tested the hypothesis that lower measured physical fitness increases the odds of failing or not completing the police academy.

Methods

The retrospective cohort consisted of all recruits (all aged 18 years and older) who enrolled in a police recruit training course at any of the 10 municipal police academies throughout Massachusetts during the period 2006–2012. All records from these recruit training courses were abstracted on MMPTC premises into an electronic database without personal identifiers. The study protocol was approved, and individual consent was waived, by the institutional review boards of the Cambridge Health Alliance and the Harvard T.H. Chan School of Public Health.

The Cooper assessment [13] is performed during the recruits' first week at the academy as a baseline set of standardized measures. It includes height, weight, body mass index (BMI), body fat per cent (caliper measurements), push-ups (number performed in one minute), sit-ups (number performed in one minute), sit-and-reach (a measure of forward reach while sitting flat on the floor with the legs flat and outstretched) and a timed 1.5-mile run (recorded in minutes and seconds) [13]. In addition, VO2 max can be estimated from the 1.5-mile run time, using standardized conversion charts [13,14]; a widely used approximation is sketched below.

In Massachusetts, police recruit training academies for full-time, entry-level municipal, University of Massachusetts, and environmental police officers consist of a 20-plus week basic training course. The programme combines 'classroom instruction, practical exercises, and scenarios designed to provide the knowledge, skills, and abilities needed to excel in the police profession and be an asset to the community' [15]. Furthermore, each recruit is expected to participate in all physical fitness training sessions during the academy. 'Full' participation includes completing runs of increasing lengths (1.5–5 miles, maximum) during the academy, at a minimum pace of 11 minutes per mile. Recruits are subject to dismissal from the academy if they fail to participate fully in more than 30% of the fitness training sessions. However, the baseline Cooper fitness assessment is not graded; no 'pass' or 'fail' judgment is given, and no credit towards graduation (positive or negative) is given. Furthermore, it is not counted as a training session, and its completion is therefore not included in the 30% fitness participation rule [16]. Successful graduation is determined by overall academy performance, including attendance, any disciplinary actions, participation in fitness training, classroom activities and written test scores, as well as other practical exercises and practical test scores.
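The standardized conversion charts cited above are not reproduced here, but a commonly used equation for the 1.5-mile run test estimates VO2 max (in ml/kg/min) as 483 divided by the run time in minutes, plus 3.5. The sketch below is an approximation under that assumption, not the paper's exact conversion.

def vo2max_from_run(minutes, seconds):
    """Estimate VO2 max (ml/kg/min) from a 1.5-mile run time,
    using the common approximation VO2max = 483 / t + 3.5 (t in minutes)."""
    t = minutes + seconds / 60.0
    return 483.0 / t + 3.5

print(round(vo2max_from_run(12, 30), 1))  # ~42.1 for a 12'30" run
print(round(vo2max_from_run(15, 20), 1))  # ~35.0 for a 15'20" run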
According to the MMPTC director's estimates, 70% of failures to graduate are due to recruits who drop out of the academies (usually because they are ill-prepared); another 20% do not meet the above-mentioned fitness participation criteria; roughly 5% fail due to poor academic performance and 5% for other reasons. However, individual reasons for academy failure were not maintained by the MMPTC and were not available for the current study. Therefore, academy failure was the primary outcome of our study, and failure was defined as not successfully graduating from the police academy for any reason.

The following data were extracted for each recruit: academy location, training start date, age, gender, entry-level Cooper fitness characteristics (see above), and academy performance/outcome (graduation/failure). One researcher independently extracted demographic and fitness data blind to the outcome data, while another researcher collected outcome data blind to the recruit's fitness data. This blinded collection of the independent variables separately from the dependent outcome variable minimized potential information bias.

We restricted the primary analyses to candidates with complete baseline Cooper physical fitness data (push-ups, 1.5-mile run time), gender, and graduation outcome (excluding missing data on a case-by-case basis). We also analysed all recruits in a model where we assumed those with a missing baseline fitness parameter were unable or unwilling to complete the baseline fitness assessment. We used generalized linear mixed models in order to fit logistic regression models, with random intercepts for academy, assessing the probability of not graduating. The parameterization of continuous covariates was chosen by first applying fractional polynomials and then selecting the parameterization that minimized the deviance of the model. We did not find evidence supporting a non-linear association between the outcome and the studied metrics of physical performance; hence, all variables were treated as linear. We assessed the presence of interactions between gender and fitness parameters by including interaction terms in the model and evaluating their significance with likelihood ratio tests. We performed statistical analyses using Stata 12.1 SE (Stata Corp, College Station, TX) and SAS 9.4 (SAS Inc., Cary, NC). We defined as statistically significant a two-sided P value of <0.05.

Results

During the study period, data were available for 2993 recruits, and the overall academy graduation rate was 90%. Gender information was missing for 25 recruits, and these were excluded from the main analysis. Of the remaining 2968 records, 13% of women (37 of 287) and 9% of men (239 of 2681) had incomplete information and were also excluded from the main analysis. Among recruits with complete exposure information (n = 2692), the academy failure rate (not graduating) was only 5% (versus 10% for the entire cohort; Table 1), compared with a 55% failure rate among recruits with incomplete baseline fitness data. The 301 recruits with missing information therefore accounted for a disproportionately large number of academy failures (166 of the total 286) (ST1). In other words, only 45% of the recruits with missing baseline fitness data graduated, and those with missing baseline data accounted for 58% of all those not graduating during the study period (ST2). The baseline characteristics of Massachusetts police recruits during the study period are presented in Table 1 by graduation status.
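The paper fits its models with SAS (GLIMMIX) and Stata. As a rough open-source analogue, the sketch below fits a logistic model with a random intercept for academy using statsmodels' variational-Bayes mixed GLM; the data file and column names are hypothetical, and the estimation method differs from the likelihood-based approach used in the paper.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical recruit-level table with columns:
#   failed (0/1), pushups, vo2max, female (0/1), academy (1-10)
df = pd.read_csv("recruits.csv")

# Random intercept for academy; push-ups, VO2 max and gender as fixed effects.
vc = {"academy": "0 + C(academy)"}
model = BinomialBayesMixedGLM.from_formula(
    "failed ~ pushups + vo2max + female", vc, df)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())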
Table 1 is limited to recruits with complete information (gender, push-ups, 1.5-mile run time and graduation status; n = 2692). The first section of the table shows that graduation rates varied significantly across individual academies; Academy 2 had the lowest graduation rate (84%) and Academy 7 the highest (100%). Supplementary Digital Content Table 3 summarizes the gender distributions and entry-level fitness characteristics for each individual academy (ST3). During the study period, the recruit population was 91% male. Graduation rates for all recruits, including those with missing baseline fitness data, were significantly higher for men (91%) than for women (87%) (P < 0.05). When only recruits with complete baseline fitness data were considered, the rates were not statistically different for men (96%) and women (95%). Among the smaller number of candidates with incomplete baseline fitness data, graduation rates were higher for men (47%) than women (35%), although the difference was not statistically significant. Successful graduates had a slightly but significantly younger age distribution. Also, on average, at academy entry, successful graduates weighed less, had less body fat, performed more push-ups and sit-ups, and had faster 1.5-mile run times (all P < 0.01, except body weight (P < 0.05)). The distributions of selected study variables in the entire population are presented by graduation status in Supplementary Digital Content Figure 1 (women) and Supplementary Digital Content Figure 2 (men) (SF1, SF2).

The results of the generalized linear mixed models for the probability of academy failure (not graduating), using Cooper fitness components as categorical variables, are presented in Table 2. The reference categories for the fitness variables are generally those that represented the largest proportion of successful male recruits (SF2). Push-ups and VO2 max (derived from run time) were significant predictors of successful graduation in all models, including the logistic regression models using Cooper fitness components as continuous variables (data not shown). In Table 3, we summarize the probability of academy failure (not graduating), expressed as a per cent chance of failure, stratified by gender and the other statistically significant variables from the previous regression models: number of push-ups and VO2 max. For both genders, as push-up capacity and VO2 max increased, the probability of successful graduation increased. In Table 4, we present results for the same matrices using the entire population of recruits, assuming that those with missing push-up or run data were unwilling or unable to complete the assessment and therefore placing them in the lowest performance category for the respective fitness component. In this model, we see the same pattern of results but with much higher failure rates for those in the lowest fitness categories: 29% of women and over 50% of men.

Discussion

This retrospective study of police academy graduation outcomes as a function of recruits' entry-level (baseline) fitness demonstrated that push-ups and 1.5-mile run time were the Cooper fitness assessment parameters most strongly associated with academy graduation. Both measures are inexpensive and do not require any special training or equipment to assess. Furthermore, pairing the results for push-ups and run time by gender provided a simple visual matrix for predicting the probability of successful graduation from a police academy.
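The probability matrices in Tables 3 and 4 follow directly from the fitted logistic model by inverting the logit link: P(fail) = 1 / (1 + exp(-eta)). The sketch below uses illustrative coefficients: the log odds ln 5.2 ≈ 1.65 and ln 3.8 ≈ 1.34 echo the abstract's odds ratios, but the intercept and the additive structure are assumptions, not the paper's estimates.

import numpy as np

def failure_probability(intercept, coefs, x):
    """Invert the logit link for one covariate pattern;
    'coefs' and 'x' are dicts keyed by predictor name."""
    eta = intercept + sum(coefs[k] * x[k] for k in coefs)
    return 1.0 / (1.0 + np.exp(-eta))

coefs = {"low_pushups": 1.65, "slow_run": 1.34, "female": 0.10}
for pu in (0, 1):            # 0 = high push-up category, 1 = lowest
    for run in (0, 1):       # 0 = fast run category, 1 = slowest
        p = failure_probability(-3.0, coefs,
                                {"low_pushups": pu, "slow_run": run, "female": 0})
        print(f"low_pushups={pu} slow_run={run}: {100 * p:.1f}% predicted failure")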
Female recruits on average completed fewer push-ups and ran slower, yet their graduation rates were nearly identical, provided they had completed the fitness assessment. Higher physical fitness may be a marker of greater motivation and preparation for the academy, which may explain in large part the observed associations. As a group, the female recruits are likely to be in better physical condition than the general female population of the same age, whereas the male recruits' average physical fitness is probably similar to that of the age-matched general male population. For example, we observed that over 60% of the women candidates had a normal or healthy body mass index, while the majority of the male police recruits were overweight or obese, consistent with our earlier study of fire and ambulance recruits in Massachusetts [12].

While the statistically significant relationships between entry fitness and graduation success were clearly evident among the majority of recruits with full information regarding fitness, the most striking finding related to recruits whose entry physical fitness assessment data were missing. The 301 recruits (10% of the study population) with missing baseline fitness information accounted for 58% (166 of 286) of those who failed to successfully graduate during the study period. Only 45% of the recruits with missing fitness data graduated, compared to 95% of the recruits with complete fitness data. Based on discussions with academy leadership, recruit candidates most often fail to graduate because they drop out or because they do not meet the minimum fitness training participation criteria. Recruits with missing fitness data were likely to represent candidates who were unable or unwilling to perform the initial Cooper fitness testing and thus prone to dropping out. Furthermore, although we cannot quantify an estimate, we learned that some academies had discarded the entire records of such candidates who dropped out early. Therefore, the present results are likely conservative estimates.

To the best of our knowledge, this is the first study of its kind to examine the relationship between police recruit physical fitness and successful academy graduation. Nonetheless, our findings are indirectly supported by previous research. A positive correlation between VO2 max and push-ups has been observed among police recruits [17]. In addition, decreased functional movement capability was found to correlate with a higher risk of injury and illness among police academy trainees [18]. Another study found that individual differences among police recruits, especially in dispositional resilience and in preference for and tolerance of highly intensive exercise, affected their level of endurance, muscular strength and overall fitness [19].

Table 2. Logistic regression modelling the probability of not graduating, with random intercepts for academy, using generalized linear mixed models (GLIMMIX) analysis; push-ups, sit-ups, sit-and-reach, and VO2 max are used as categorical variables.

Law enforcement officers are commonly exposed to high levels of occupational stress, so dispositional resilience may be an important factor in determining the long-term health of police officers [6,20]. State and town sponsorship of recruit officers represents a major financial investment. Therefore, maximizing the likelihood of successful completion of police academy training is in the interest of multiple stakeholders, including the sponsors, police recruits, and the tax-paying public.
Based on the current results and the distribution of baseline fitness attributes among the recruits studied, we have proposed two sets of recommended cut-off points to be validated in a separate, prospective study of Massachusetts police academy recruits (a simple classification of these cut-offs is sketched below). The first is a recommended 'Minimum' Entry Fitness Criteria of >10 push-ups and a 1.5-mile run of <15′20″ for women applicants, and >20 push-ups and a 1.5-mile run time of <15′20″ for male applicants. Based on the current study, otherwise qualified applicants meeting the minimum entry criteria should have more than a 95% likelihood of graduating from the academy. We further suggested 'Target' Entry Fitness Criteria of >20 push-ups and a 1.5-mile run time of <14′ for females, and >40 push-ups and a 1.5-mile run time of <12′30″ for males. Based on the current study, qualified applicants meeting the target entry criteria should have about a 98% likelihood of graduating from the academy. Establishing and validating evidence-based fitness standards through additional prospective study should give future recruits actionable information to better prepare for police academy training and improve their likelihood of successful graduation. Rather than present a barrier for applicants who are below fitness standards, discrete minimum fitness goals could empower candidates to achieve the level of physical fitness most associated with police academy success.

The present study has several strengths. It is large, spanning several thousand recruits in multiple academies over a 7-year period. Moreover, our results are most likely conservative estimates of the strength of the relationship between increasing fitness and an increasing likelihood of graduation. As elaborated above, recruits with missing data were likely to have been unable or unwilling to complete the fitness assessment, and some may have quit the academy at that point or shortly thereafter. This would explain their disproportionate failure rate of 55%, or 10-fold that of recruits with complete baseline fitness data. Finally, another strength was that fitness and outcome data were extracted in a blinded fashion, which minimized the chances of information bias.

This study also has some limitations. First, the study cannot demonstrate causality between entry fitness levels and police academy graduation. The association between fitness and graduation outcomes may be determined by other associated factors such as attitude, motivation, and overall preparation. Furthermore, our study cannot reach any conclusions regarding recruit fitness and subsequent performance as a police officer. These limitations, however, do not alter the utility of using fitness as a predictor of academy success. Second, our analyses were limited by the retrospective design and missing baseline fitness data for some recruits. We also cannot quantify the number of police officer recruits who left the academy within the first several days because of negative experiences with fitness testing, as those records were not consistently maintained. However, this limitation does not change our results but leads us to conclude that our findings are likely to underestimate the true association between baseline fitness and academy graduation.
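To make the proposed cut-offs concrete, the helper referenced earlier classifies a recruit against the 'Minimum' and 'Target' entry criteria exactly as stated in the text; the function name and interface are illustrative only, and the criteria remain pending prospective validation.

def entry_fitness_tier(sex, pushups, run_minutes, run_seconds):
    """Classify a recruit against the proposed entry criteria.
    sex: "F" or "M"; run time converted to seconds for comparison."""
    t = run_minutes * 60 + run_seconds
    if sex == "F":
        minimum = pushups > 10 and t < 15 * 60 + 20   # >10 push-ups, <15'20"
        target  = pushups > 20 and t < 14 * 60        # >20 push-ups, <14'
    else:
        minimum = pushups > 20 and t < 15 * 60 + 20   # >20 push-ups, <15'20"
        target  = pushups > 40 and t < 12 * 60 + 30   # >40 push-ups, <12'30"
    return "target" if target else ("minimum" if minimum else "below minimum")

print(entry_fitness_tier("M", 35, 13, 10))  # -> "minimum"
print(entry_fitness_tier("F", 25, 13, 30))  # -> "target"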
On the other hand, due to the lack of information on specific individual reasons for academy failure, we are unable to confirm the likely associations between poor baseline fitness and a greater risk of dropping out early from academies, and of failure to meet the minimum physical fitness participation standards. In conclusion, our findings strongly support that certain academy-entry fitness characteristics are strongly associated with the likelihood of recruits' subsequent graduation from Massachusetts police academies. Based on the present findings, the MMPTC has commissioned a prospective cohort study.

Table 4. Percentage (%) of candidates not graduating according to gender, number of push-ups and 1.5-mile run time, based on the GLIMMIX model with sex, push-up categories and VO2 max categories included (run times expressed in minutes (′) and seconds (″)); gender was set to male if missing, and push-up and VO2 categories were set to the lowest categories if missing.

Key points
• Our findings support an association between certain academy entry fitness characteristics and the likelihood of successfully graduating from a Massachusetts police academy.
• The 1.5-mile run time and push-ups were the fitness components most strongly associated with graduation and with each other.
• Pending prospective validation, these two components are simple, low-cost initial assessments that police academies could use before admitting recruits into the academy to predict a candidate's likelihood of successfully graduating.

Funding
Massachusetts Municipal Police Training Committee (MMPTC) and the National Institute for Occupational Safety and Health (NIOSH) [2 T42 OH008416-09]. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the MMPTC or NIOSH.
2018-04-03T02:43:05.738Z
2017-09-02T00:00:00.000
{ "year": 2017, "sha1": "a2deb9604ef4ed706ad3f3477ac54f40676a1354", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/occmed/article-pdf/67/7/555/26185822/kqx127.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a2deb9604ef4ed706ad3f3477ac54f40676a1354", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
249386887
pes2o/s2orc
v3-fos-license
A Case of Coexisting Depression and Hoarding Disorder

A hoarding disorder manifests as difficulty in discarding or letting go of items irrespective of their actual worth, together with persistent acquisition of items. Increasing numbers of possessions clutter active living spaces to the point where their intended use is no longer possible, leading to significant functional impairment. In the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), it is classified under the category of obsessive-compulsive and related disorders. First considered to be a part of obsessive-compulsive disorder, compulsive hoarding was subsequently classified as an additional dimension of the obsessive-compulsive disorder spectrum. Hoarding disorder had been largely ignored clinically until recently, despite its negative consequences for individuals, families, and communities. Comorbid conditions such as anxiety, depression, obsessive-compulsive disorder, post-traumatic stress disorder, and panic disorder are well known to accompany hoarding disorder. This study illustrates the case of a 35-year-old married man who was referred to a psychiatrist by his primary care physician for collecting many different objects of little or no importance. These objects were lying unorganized throughout his house, cluttering most of his living space. Some of these things had been discarded by his wife, which, according to him, contributed to his emotional distress. The pattern of behavior began about 10 years earlier and became increasingly problematic with time. Although hoarding disorder is often underreported, it is vital to diagnose this condition, as it significantly affects the individual and their family and friends. Severe hoarding can pose a number of health and safety risks, including fire hazards, tripping hazards, and health code violations.

Introduction

In the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), hoarding disorder (HD) was added as an independent diagnosis, replacing its inclusion under obsessive-compulsive disorder in the DSM-IV. The hoarding condition may lead to impairment in such basic functions as eating, sleeping, and grooming [1]. HD is characterized by a persistent inability to discard items, the desire to save items instead of discarding them, significant accumulations of possessions that clutter living spaces, and significantly dysfunctional patterns. The items are typically considered to be useful in the future or aesthetically impressive, or to have an emotional connection [1]. The prevalence of hoarding disorder in the general population is estimated at 2.6%, with higher rates among people over 60 years of age and individuals with other psychiatric diagnoses [2]. Compared to the general population, individuals with hoarding disorder tend to be older, unemployed, and more often unmarried, separated, or divorced [3]. A majority of hoarders do not consider their behavior problematic. Typically, hoarding patients accumulate items passively rather than intentionally, which leads to a gradual accumulation of clutter over time. Those with HD may experience distress similar to that of people with obsessive-compulsive disorder (OCD) when their possessions are touched or moved without their permission. The two conditions, however, differ significantly. OCD is characterized by obsessive or intrusive thoughts that are repetitive and unwanted, while in HD, thoughts regarding the keeping or acquisition of objects are not considered intrusive or unwanted.
While intrusive thoughts are distressing in OCD, distress in HD results more from the consequences of thoughts or behaviors than from the thoughts or behaviors themselves [4]. As HD progresses through each decade of life, symptoms worsen, and the level of distress increases if the family or authorities intervene. In many cases, HD occurs in conjunction with other psychiatric disorders. Major depression is the most common comorbidity, occurring in as many as 50% of cases [5]. The anterior cingulate cortex (ACC) and related areas of the brain appear to be particularly associated with hoarding. The dorsal ACC is involved in decision-making and reward-based learning, while the ventral ACC is associated with emotional and motivational experiences. Functional magnetic resonance imaging studies have shown that ACC activation is lower in hoarding individuals compared with control individuals [6,7]. Usually, hoarding disorder is diagnosed based on a direct interview with the person to determine whether or not the characteristics of the disorder are present. It is crucial to recognize that hoarding is not always the primary reason for consultation, as in this instance, where depression and apathy were the primary reasons for consulting the primary care physician. Clinicians can direct questions using assessment scales such as the Structured Interview for Hoarding Disorder, the Clutter Image Rating, and the Hoarding Rating Scale-Interview to assess for HD [5,8,9].

Case Presentation

The patient is a 35-year-old male who was brought to his primary care physician (PCP) by his wife with complaints of lethargy and depression. His primary care physician referred him for further evaluation by a psychiatrist. The patient had no past medical history; however, his psychiatric history included a diagnosis of generalized anxiety disorder made 15 years earlier. In the course of the initial evaluation, the patient revealed that he becomes irritable and depressed when his wife discards any items in his possession. He stated that his wife had discarded more of his items over the past year, causing him to become progressively depressed. Considering his items to be valuable and likely to be useful in the future, the patient began collecting mail, magazines, tissue rolls, clothing, cooking supplies, and snacks. He had the habit of purchasing in greater quantities than he needed, in the expectation that his purchases would prove useful in the future. Despite never using these items, he could not make himself get rid of them. He acknowledged that his habit of excessive acquisition and his difficulty discarding these items were problematic and troubling for his wife as well. While he could not discard his possessions because of their possible use, he did not consider his thoughts about them to be repetitive or distressing, despite the fact that the possessions were taking over his living areas. There were objects piled up in every room in his house, including his living room, bedroom, basement, garage, and surrounding areas. There was no family history of psychiatric illness. A physical examination showed no abnormalities. The patient denied manic or psychotic symptoms, as well as suicidal thoughts. The patient had no history of alcohol or substance abuse. A mental status evaluation revealed a well-groomed middle-aged man. The patient was alert, aware, and oriented to his surroundings.
His behavior was cooperative and calm, he made appropriate eye contact, and he interacted in a rational manner throughout the interview. His speech was organized and coherent. He described his mood as depressed, and he had a depressed affect. His insight was fair, and his memory, judgment, and concentration were excellent. His thought process was simple and linear. He was evaluated for hoarding disorder with the Hoarding Rating Scale (HRS), and his results were clinically significant (Table 1).

Scoring scale: each item is rated on a scale of 0-8, where 0 indicates no problems, 2 mild, 4 moderate, 6 severe, and 8 extreme problems.
Interpretation of HRS total scores [9]: mean for nonclinical samples, HRS total = 3.34 (standard deviation = 4.97); mean for people with hoarding problems, HRS total = 24.22 (standard deviation = 5.67). Analyses of sensitivity and specificity suggest an HRS total clinical cut-off score of 14.
Criteria for clinically significant hoarding [10]: a score of 4 or greater on questions 1 and 2, and a score of 4 or greater on either question 4 or question 5.

Discussion

In this study, we describe the case of a patient with hoarding disorder coupled with depression. A progression of accumulation behavior and an inability to discard objects were observed in the patient over time. He was referred to a psychiatrist's care and diagnosed with hoarding disorder about ten years after the inception of his symptoms. Such a delay in visiting a physician and receiving a timely diagnosis is common with HD, resulting in worsening symptoms over time and a detrimental impact on quality of life. In this case, the patient was experiencing emotional distress from the efforts of his wife to clean their home, and his relationship with her was suffering. The patient had good insight into the effects of his hoarding behaviors but still found it distressing to part with items that had no usefulness to him. He identified these resentful feelings towards his wife as the cause of his recent depressive and lethargic state. According to a number of studies, people with HD are significantly more likely to suffer from depression than people in the general population [5]. The patient has a history of generalized anxiety disorder but no previous diagnosis of major depressive disorder (MDD). As part of the diagnosis of HD, the clinician had to rule out the possibility that the symptoms of hoarding were better explained by another psychiatric disorder. In this case, it is apparent that he was unable to throw things out not because of a lack of energy or a depressed state, but because of the items' perceived importance for a future time. As his HD progressed over the last ten years, the consequences of his disorder became more disruptive to his relationships. The patient perceived the attempts of his wife to declutter their home as contributing to his feelings of depressed mood and lethargy. The distress of discarding items in persons with HD is associated with functional changes in brain areas regulating anxiety and sadness when compared to controls [11]. Although the precise cause of hoarding disorder is unknown, this statistically significant difference may help explain the distress caused to the patient by his wife's efforts. The lasting effects of his sadness, including behavioral changes not previously seen and psychomotor retardation, were related to his hoarding disorder but met the criteria for an independent diagnosis of major depressive disorder.
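For illustration, the HRS interpretation rules quoted above can be applied programmatically. The sketch assumes the usual HRS item order (clutter, difficulty discarding, acquisition, distress, impairment), and the example item scores are hypothetical, not the patient's actual ratings.

def hrs_assessment(scores):
    """Apply the HRS interpretation above to five item scores (each 0-8),
    ordered: clutter, difficulty discarding, acquisition, distress, impairment."""
    q1, q2, q3, q4, q5 = scores
    total = sum(scores)
    above_cutoff = total >= 14                        # suggested clinical cut-off [9]
    clinically_significant = (q1 >= 4 and q2 >= 4     # criteria from [10]
                              and (q4 >= 4 or q5 >= 4))
    return total, above_cutoff, clinically_significant

# Hypothetical item scores consistent with a clinically significant result
print(hrs_assessment([6, 6, 6, 4, 4]))  # -> (26, True, True)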
This case illustrates one example of a comorbid mood disorder with hoarding disorder. The primary treatment for hoarding disorder is cognitive behavioral therapy (CBT), which includes psychoeducation, motivational interviewing, classic cognitive procedures centered on dysfunctional beliefs, and exposures focusing on sorting and discarding. Some pharmacological therapies can also benefit patients. This patient was scheduled to participate in weekly cognitive behavioral therapy sessions in order to gain an understanding of his beliefs, educate him, and make him aware of the importance of treatment. The sessions also focused on the cognitive restructuring of his beliefs to curb his compulsive purchasing and to guide him in discarding objects from his house. In addition to CBT, fluoxetine 10 mg once daily was prescribed to manage his depressive symptoms. A family therapy session was suggested for him and his wife. He had a monthly appointment with the psychiatrist to monitor his progress. There was no notable improvement throughout the initial three months of treatment. However, by the fourth and fifth months, the patient had stopped compulsive purchasing. By the end of ten months, he was able to relieve his house of unnecessary possessions. His depressive symptoms had returned to normal levels, and he reported feeling happy and optimistic.

Conclusions

Hoarding disorder is characterized by a persistent inability to discard or part with possessions, irrespective of their value. Affected individuals see a need to preserve the items and experience discomfort when confronted with the possibility of discarding them. It is critical to understand that hoarding is often not the major reason for consultation, as was the scenario in this case report, where depression and lethargy were the key reasons for consulting the primary care physician. Overbuying and hoarding multiple items throughout his house had strained the patient's relationship with his wife because of his unwillingness to discard them. Attempts to discard the items from the house led to depression in our patient. Hence, it is important to recognize that patients with hoarding disorder may also suffer from anxiety or depression that needs to be treated as well. Hoarding can severely impact the lives of affected patients and those around them if it is not diagnosed and treated early. In addition to CBT, interpersonal and family therapy should be considered in order to further improve the patient's relationships with family and friends. There is a need for studies evaluating treatments for HD in order to improve patients' quality of life and to reduce the possible hazards associated with the disorder.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
MITF reprograms the extracellular matrix and focal adhesion in melanoma

The microphthalmia-associated transcription factor (MITF) is a critical regulator of melanocyte development and differentiation. It also plays an important role in melanoma, where it has been described as a molecular rheostat that, depending on activity levels, allows reversible switching between different cellular states. Here, we show that MITF directly represses the expression of genes associated with the extracellular matrix (ECM) and focal adhesion pathways in human melanoma cells, as well as of regulators of epithelial-to-mesenchymal transition (EMT) such as CDH2, thus affecting cell morphology and cell-matrix interactions. Importantly, we show that these effects of MITF are reversible, as expected from the rheostat model. The number of focal adhesion points increased upon MITF knockdown, a feature observed in drug-resistant melanomas. Cells lacking MITF are similar to the cells of minimal residual disease observed in both human and zebrafish melanomas. Our results suggest that MITF plays a critical role as a repressor of gene expression and is actively involved in shaping the microenvironment of melanoma cells in a cell-autonomous manner.

Introduction

Melanoma is a highly aggressive form of skin cancer that originates from melanocytes. Approximately 60% of melanoma tumors harbor a BRAF mutation, most often BRAF V600E, which leads to hyperactivation of the mitogen-activated protein kinase (MAPK) pathway (Davies et al., 2002). Drugs targeting the BRAF and MAPK pathways are clinically important, but almost invariably, resistance arises within a short time period (Kugel and Aplin, 2014). Melanoma inherits its aggressive nature from its multipotent neural crest precursors, which give rise to various cell types including melanocytes, glia, and adrenal cells (Le Douarin and Kalcheim, 1999; Le Douarin and Dupin, 2018). The developmental programme of neural crest cells is believed to be reinitiated during melanoma progression, and dysregulation of neural crest genes is predictive of metastatic potential and negative prognosis in melanoma (Mascarenhas et al., 2010; Bailey et al., 2012; Kulesa et al., 2006). Various studies, including gene expression studies of tumors, immunohistochemical analysis of melanoma samples, and single-cell sequencing studies of patient-derived xenografts, suggest the existence of different cell types in melanoma tumors. This cellular heterogeneity is believed to reflect the ability of tumor cells to switch their phenotype from proliferative, non-invasive cells to quiescent, invasive cells and back, thus allowing metastasis and escape from therapeutic intervention (reviewed in Rambow et al., 2019). This has been summarized in the phenotype switching model, which suggests that melanoma cells can switch between invasive and proliferative states, allowing them to either grow and form a tumor or metastasize to a new site (Rambow et al., 2019; Hoek and Goding, 2010). Understanding the molecular mechanisms underlying the phenotypic plasticity of melanoma cells is key to addressing their metastatic potential. The microphthalmia-associated transcription factor (MITF) is essential for melanocyte differentiation, proliferation, and survival. MITF is also important during melanomagenesis (reviewed in Goding and Arnheiter, 2019).
This is best evidenced by the observations that the rare germline mutation E318K of MITF increases susceptibility to melanoma and that MITF has been shown to be amplified in 15% of melanoma tumors (Garraway et al., 2005; Yokoyama et al., 2011). Importantly, MITF activity has been used as a proxy for the phenotype switching model, with MITF high cells characterized as proliferative, whereas MITF low cells have been assigned a quiescent, invasive phenotype (Carreira et al., 2006; Hoek et al., 2006). In fact, MITF has been proposed to act as a rheostat where the levels of MITF activity determine the phenotypic state of melanoma cells (reviewed in Rambow et al., 2019). Since MITF expression and activity are regulated by various signaling pathways, the tumor microenvironment has been proposed to instruct phenotypic changes in melanoma cells and thus foster disease progression (Feige et al., 2011; Miskolczi et al., 2018; Widmer et al., 2013; Riesenberg et al., 2015). However, antibody staining suggests that cells lacking MITF are abundant in melanomas (Goodall et al., 2008), and single-cell sequencing of human xenotransplants and of zebrafish melanoma models suggests the existence of cells with very low MITF expression (Rambow et al., 2018; Travnickova et al., 2019). These cells belong to a population believed to represent minimal residual disease: cells that remain viable upon drug exposure.

The extracellular matrix (ECM) is an important component of the tumor microenvironment, as it provides cells with biochemical and structural support. In melanoma, expression of ECM proteins such as tenascin and fibronectin increases during disease progression (Frey et al., 2011). Focal adhesions not only offer physical attachment of cells to the ECM through the integrin receptor but also initiate signaling cascades that regulate cell proliferation, migration, and survival (Mitra et al., 2005; Geiger et al., 2001; Playford and Schaller, 2004). A key focal adhesion signaling protein is Focal Adhesion Kinase (FAK), which activates the ERK pathway via Grb-FAK interactions (Schlaepfer et al., 1999). An important scaffolding protein at the focal adhesion complex is Paxillin (PXN), which recruits other proteins to the focal adhesion sites when phosphorylated by FAK and SRC (Deakin and Turner, 2008). Importantly, phosphorylation of PXN is critical for activation of RAF, MEK, and ERK and has been shown to confer drug resistance by activating Bcl-2 through ERK signaling (Wu et al., 2016; Sen et al., 2010; Ishibe et al., 2003; Sen et al., 2012; Hirata et al., 2015). This highlights the importance of identifying the molecular mechanisms that confer on cells the ability to circumvent drug inhibition through phenotypic changes. In this study, we show that MITF represses the expression of focal adhesion and ECM genes in melanoma cells and tissues. Our findings reveal a new role for MITF in regulating the expression of genes that are essential for creating the melanoma microenvironment, establishing a link to melanoma progression and drug resistance.

Melanoma cells devoid of MITF are enlarged and exhibit altered matrix interactions

To assess the effects of permanent loss of MITF in melanoma cells, we used the clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 technique to generate MITF knockout (KO) cell lines in the human hypo-tetraploid SkMel28 melanoma cell line (containing four copies of MITF).
We targeted exons 2 (an early exon containing a transactivation domain) and 6 (containing the DNA-binding domain) of MITF separately, and the resulting isogenic cell lines are hereafter referred to as ΔMITF-X2 and ΔMITF-X6 (Figure 1a). The control cell line EV-SkMel28 was generated by transfecting SkMel28 cells with Cas9 along with the empty gRNA plasmid. To identify mutations introduced in the cell lines, we performed whole genome sequence (WGS) analysis, which showed that mutations were introduced in MITF in both the ΔMITF-X2 and ΔMITF-X6 cells (Figure 1b,c) but not in the EV-SkMel28 control. In addition, we confirmed the WGS analysis by amplifying the mutated genomic regions, cloning them into vectors, and performing Sanger sequencing. The ΔMITF-X2 line had two different but independent insertion mutations in the same codon (insertion of A and T in the codon for Y22) and a 5 bp deletion (encoding Y22 and H23) that are present in 64%, 19%, and 17% of sequenced DNA fragments in this region, respectively. All these mutations introduced frameshifts and premature stop codons in exon 2 of MITF (Figure 1b). Sanger sequencing of DNA clones containing PCR-amplified cDNA fragments from the MITF gene of ΔMITF-X2 cells verified that the ΔMITF-X2 cells have the same mutations at similar frequency as observed in the WGS data (Figure 1-figure supplement 1c). The mutations present in the ΔMITF-X6 line are the following: 52% of the sequenced fragments contained a deletion of 1 bp (encoding residue A198), 33% contained a 6 bp in-frame deletion in the basic domain of the protein (encoding residues R197-R198), and 15% of the sequenced fragments contained a 17 bp deletion (encoding residues 198-203). Both the 1- and 17-bp deletions introduced frameshifts and downstream stop codons (Figure 1c), whereas the in-frame 6 bp deletion removed two amino acids at the beginning of the alpha-helix encoding the basic domain and is therefore not expected to be able to bind DNA. No wild-type MITF gene was detected in either cell line. In both cell lines, the ratio of mutants is consistent with two chromosomes carrying the same mutation and the remaining two chromosomes each carrying a different mutation.

Western blotting revealed that the ΔMITF-X6 cells express very little, if any, MITF protein. Although the ΔMITF-X2 cells did not express the full-length ~55 kDa MITF protein, truncated forms of MITF were detected at ~40 and 47 kDa (Figure 1d). These truncated forms were also present in wild-type cells, albeit at lower levels, suggesting that these are alternative isoforms of the MITF protein (Figure 1d). In order to determine if these shorter isoforms are due to alternative splicing, we performed RT-PCR across several exon-intron borders around exon 2 of the MITF transcript. Our results did not show any alternative splice forms of MITF (Figure 1-figure supplement 1a-b). Similarly, neither the WGS nor the RT-PCR studies showed evidence for transcripts lacking exon 2 in the ΔMITF-X2 cDNA, indicating that the truncated MITF proteins are most likely products of alternative translation start sites (Figure 1-figure supplement 1c). The C5 MITF antibody used here recognizes an epitope located between residues 120 and 170 of MITF, which corresponds to exons 4 and 5 (Figure 1a; Fock et al., 2019). The truncated proteins observed in wild-type and ΔMITF-X2 cells must still contain this region and are therefore likely to arise from alternative translation start sites.
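As a quick arithmetic check on the chromosome-ratio interpretation above, the expected allele fractions under a "two chromosomes share one mutation, the other two each carry their own" model can be compared with the observed read fractions. A minimal R sketch (observed frequencies taken from the text; the 2:1:1 model is an illustrative assumption, not the authors' code):

```r
# Expected allele fractions in a hypo-tetraploid clone if two of four
# chromosomes share one mutation and the remaining two each carry a
# distinct mutation: 2:1:1, i.e. 50%, 25%, 25%.
expected <- c(2, 1, 1) / 4

# Observed fractions of mutant reads in each clone (from the WGS data).
obs_X2 <- c(0.64, 0.19, 0.17)  # insertion of A, insertion of T, 5 bp deletion
obs_X6 <- c(0.52, 0.33, 0.15)  # 1 bp, 6 bp, and 17 bp deletions

# Both clones are roughly consistent with the 2:1:1 expectation,
# allowing for sampling noise in the read counts.
rbind(expected = expected, dMITF_X2 = obs_X2, dMITF_X6 = obs_X6)
```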
Immunostaining revealed a mostly nuclear staining of MITF in both the EV-SkMel28 and ΔMITF-X2 cells (Figure 1e), indicating that the truncated MITF isoforms reside in the nucleus. However, in the ΔMITF-X6 cells, no signal for MITF was observed in the nucleus, whereas a very low background signal was observed in the cytoplasm (Figure 1e). To summarize, we have generated the ΔMITF-X6 cells, which are devoid of wild-type MITF, and the ΔMITF-X2 cells, which carry a hypomorphic mutation. Below, we refer to both as CRISPR MITF-KO cell lines due to the way they were generated.

Morphological analysis revealed that both MITF-KO cell lines exhibited enlarged cytoplasm as compared to controls (Figure 1e-g). Vimentin staining revealed enlarged cells (Figure 1e), which is consistent with a report showing that loss of MITF affects the cytoskeletal structure and shape of melanoma cells (Carreira et al., 2006). Quantification of phase-contrast microscopy images revealed that the average size of the ΔMITF-X2 and ΔMITF-X6 cells was 1.7-fold larger than that of the EV-SkMel28 cells (Figure 1g). To characterize the behavior of the cell lines when provided with ECM that mimics the basement membrane, we seeded the cells on top of matrigel-coated slides, supplemented with complete growth medium containing 2% matrigel. Both MITF-KO cell lines formed aggregates, whereas the control EV-SkMel28 cells displayed a flat, sheet-like morphology (Figure 1h). Taken together, our results show that loss of MITF led to changes in cell morphology and cell-matrix interactions.

Expression of ECM and focal adhesion genes is increased upon loss of MITF

Next, we compared the transcriptomic profile of the ΔMITF-X6 cells (exhibiting complete loss of wild-type MITF) to the EV-SkMel28 control cells. We identified 2136 differentially expressed genes (DEGs) between ΔMITF-X6 and EV-SkMel28 cells with the cutoff qval < 0.05 (Supplementary file 1). Of these, 1516 genes showed a twofold change in expression (Figure 2a). Gene ontology and KEGG pathway enrichment analysis revealed that the genes reduced in expression upon MITF depletion were verified MITF target genes involved in pigmentation and pigment cell differentiation, such as DCT, MLANA, OCA2, and IRF4, in addition to MITF itself (Figure 2a,b; Supplementary file 1). Genes whose expression was increased upon loss of MITF were enriched in processes involved in glycosaminoglycan metabolism, ECM organization, and extracellular structure organization, and included genes such as SERPINA3, ITGA2, PXDN, and TGFβ1 (Figure 2a,b; Supplementary file 1).

In order to investigate whether the genes affected by MITF loss are direct targets of MITF, we used the CUT&RUN method to map protein-DNA interactions (Skene and Henikoff, 2017). Briefly, a chromatin isolate was incubated with an antibody against MITF, and then Protein A/G fused with Micrococcal nuclease (MNase) was added to cut the DNA recognized by the target antibody. The resulting DNA fragments were then sequenced (Skene and Henikoff, 2017). We used an anti-MITF antibody (i.e. MITF CUT&RUN) in the SkMel28 cells to map MITF genome-wide binding sites (Meers et al., 2019). We identified 37,643 peaks located within ±10 kb of the TSS, 3'UTR, or intronic regions of 8288 genes (Figure 2-figure supplement 1a; Supplementary file 2). Gene ontology analysis revealed that among genes associated with MITF CUT&RUN peaks (i.e. MITF peaks), those which showed increased expression upon MITF loss were enriched for aminoglycan, ECM, and axogenesis pathways, whereas those with reduced expression upon MITF loss were enriched for genes involved in pigmentation (Figure 2d). We found that 695 of the 1284 induced genes (p<7.3e-09, hypergeometric test) and 535 of the 852 repressed genes (p<6.6e-23, hypergeometric test) were associated with MITF peaks (Figure 2e; Supplementary file 2). Of the 183 ECM and focal adhesion genes whose expression was increased upon MITF knockout, 101 were associated with MITF peaks and induced in expression upon loss of MITF (Supplementary file 2).
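Overlap significance of this kind is typically computed with a one-sided hypergeometric test. A minimal R sketch using the counts quoted above; the size of the gene universe is not stated in the text, so the 20,000 used here is a placeholder assumption and the resulting p-values are illustrative only:

```r
# One-sided hypergeometric (over-representation) test for the overlap
# between MITF-peak-associated genes and genes induced upon MITF loss.
N <- 20000   # assumed size of the gene universe (placeholder)
m <- 8288    # genes associated with MITF CUT&RUN peaks
k <- 1284    # genes induced upon MITF knockout
q <- 695     # induced genes that are also peak-associated

# P(overlap >= q) under random sampling without replacement
p_induced   <- phyper(q - 1, m, N - m, k, lower.tail = FALSE)
p_repressed <- phyper(535 - 1, m, N - m, 852, lower.tail = FALSE)

c(induced = p_induced, repressed = p_repressed)
```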
We also compared our MITF CUT&RUN peaks with the published MITF ChIP-seq data from COLO829 cells (generated using the same antibody as used here). To determine whether the MITF peaks near induced and reduced genes contained the canonical MITF-binding sites 5'-TCACGTG-3' or 5'-TCATGTGA-3', we performed de novo motif analysis of MITF-bound regions near DEGs using the MEME-ChIP tool (Ma et al., 2014).

MITF depletion leads to increased expression of ECM genes

In order to verify that the link between MITF and the ECM and focal adhesion genes is not restricted to a particular cell line, we performed knockdown and overexpression studies in independent human melanoma cell lines and characterized gene expression data in the Cancer Genome Atlas. First, we performed mRNA sequencing after transient knockdown of MITF in SkMel28 and 501Mel cells, both of which express MITF endogenously at high levels. We identified 1040 DEGs (qval < 0.05, |log2FC| ≥ 1; 567 induced, 473 reduced) upon siRNA-mediated MITF depletion in SkMel28 cells compared to siCTRL, and 1114 DEGs in 501Mel cells (qval < 0.05, |log2FC| ≥ 1; 624 induced, 490 reduced) (Supplementary file 1). A significant correlation was observed between the DEGs of ΔMITF-X6 vs. EV-SkMel28 cells and the DEGs of siMITF vs. siCTRL in SkMel28 (Pearson correlation = 0.66, p<2.2e-16) and 501Mel cells (Pearson correlation = 0.57, p<2.2e-16) (Figure 3a,b).

Second, we used the Cancer Genome Atlas dataset to characterize differential gene expression, splitting the tumors into two groups: those with the 10% highest (MITF high) and 10% lowest (MITF low) expression of MITF. By performing differential gene expression analysis between the two groups, we identified 2655 DEGs (FDR < 0.01, |log2FC| ≥ 1; 1835 induced and 820 reduced) between MITF low and MITF high tumors (Supplementary file 1). Interestingly, the DEGs observed when comparing the ΔMITF-X6 cells to the EV-SkMel28 cells and the DEGs observed upon comparing the MITF low and MITF high tumors were significantly correlated (R = 0.76, p<2.2e-16) (Figure 3c). Additionally, principal component analysis using the top 200 most statistically significant genes in each case revealed that MITF low tumors cluster near the ΔMITF-X6 cells, whereas MITF high tumors cluster with the EV-SkMel28 cells, indicating that ΔMITF-X6 cells portray the transcriptional state of MITF low tumors (Figure 3e).
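Correlations between DEG sets like those reported above can be computed by matching the two result tables on gene identifiers and correlating their fold changes. A minimal R sketch; the object and column names are placeholders, not the authors' actual code:

```r
# ko, kd: data frames of differential-expression results with columns
# 'gene' and 'log2FC' (placeholder names), e.g. from sleuth output for
# the knockout and knockdown comparisons, respectively.
merged <- merge(ko, kd, by = "gene", suffixes = c("_ko", "_kd"))

# Pearson correlation between log2 fold changes of the shared genes
ct <- cor.test(merged$log2FC_ko, merged$log2FC_kd, method = "pearson")
ct$estimate   # correlation coefficient (e.g. ~0.66 for siMITF SkMel28)
ct$p.value
```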
Third, we investigated whether overexpression of MITF would lead to repression of ECM genes. To do this, we performed mRNA sequencing in A375P cells overexpressing a dox-inducible FLAG-tagged MITF construct (pB-MITF-FLAG). A control A375P cell line was generated using an empty vector expressing only FLAG (pB-FLAG). We identified 8110 DEGs (qval < 0.05, |log2FC| ≥ 1; 4863 induced, 3247 reduced) between pB-MITF-FLAG and pB-FLAG A375P cells, and among the genes decreased in expression are ECM-related genes (Supplementary file 1). As expected, the DEGs observed upon MITF overexpression in A375P cells were anti-correlated with the DEGs observed when comparing ΔMITF-X6 to EV-SkMel28 cells (Pearson correlation = −0.46, p<2.2e-16) (Figure 3d).

[Figure 2 legend, continued: Reduced and induced DEGs of ΔMITF-X6 vs. EV-SkMel28; p value colored from red (lowest) to blue (highest); gene ratio is the ratio between the genes in question and all genes in the GO category; reduced: genes reduced in expression in ΔMITF-X6 compared to EV-SkMel28; induced: genes induced in expression in ΔMITF-X6 compared to EV-SkMel28. (d) GO BP analysis of genes associated with MITF CUT&RUN peaks, plotted using clusterProfiler (Yu et al., 2012) in R; All: MITF CUT&RUN peak-associated genes; induced and reduced: induced or reduced DEGs of ΔMITF-X6 vs. EV-SkMel28 cells based on MITF CUT&RUN peak presence in their gene promoter or distal regions. (e) Venn diagram showing the overlap between MITF targets identified from MITF CUT&RUN and the induced, reduced, and ECM and focal adhesion DEGs of ΔMITF-X6 vs. EV-SkMel28 cells. (f) Venn diagram displaying the common overlap between MITF ChIP-seq targets in different cell lines and differentially expressed ECM and focal adhesion genes in ΔMITF-X6 vs. EV-SkMel28 cells. (g) Heatmap showing the differentially expressed ECM genes in ΔMITF-X6 vs. EV-SkMel28 cells that are commonly bound by MITF across the different MITF CUT&RUN data sets; z-score-converted TPM values from the RNA-seq data were used for plotting.]

To classify genes that are overrepresented after the loss or gain of MITF, we performed GO term enrichment analysis on the DEGs, which revealed an induction of ECM-related genes upon MITF depletion in 501Mel and SkMel28 cells as well as in MITF low tumors, whereas genes involved in pigmentation were reduced in expression (Figure 3f). Conversely, overexpressing MITF in the A375P cell line led to a reduction in the expression of ECM genes and an induction of pigmentation and autophagy genes, again showing that MITF negatively regulates the expression of ECM genes (Figure 3f). Analysis of the MITF ChIP-seq data (Laurette et al., 2015) showed that a significant portion of the ECM genes differentially expressed upon MITF KD and in MITF low tumors have MITF peaks in their regulatory domains (Supplementary file 5; Figure 3g). In contrast, overexpression of MITF led to the repression of 213 ECM genes, 82 of which were direct MITF targets, indicating a major repressive influence of MITF on ECM gene expression (Figure 3g; Supplementary file 5). We confirmed the repressive effects of MITF by RT-qPCR in dox-inducible A375P cells overexpressing pB-MITF-FLAG, which showed a significant reduction in the expression of LOXL2, MMP15, MMP2, and COL1A2 when compared to control pB-FLAG cells (Figure 3h). Together, our data support the conclusion that MITF is an important direct repressor of ECM gene expression in human melanoma cells and tissues.
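Heatmaps like the one described in the Figure 2 legend above typically display row-wise z-scores of TPM values, so that each gene is shown relative to its own mean across samples. A minimal base-R sketch; the `tpm` matrix is a placeholder, and the log2 step is an assumption, since the legend does not state whether values were log-transformed first:

```r
# tpm: genes x samples matrix of TPM values (placeholder object).
# scale() standardizes columns, so transpose to z-score each gene (row).
z <- t(scale(t(log2(tpm + 1))))

# Each row of z now has mean 0 and standard deviation 1 across samples,
# which is the usual input for expression heatmaps.
```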
Next, we analyzed whether the collagens that were differentially expressed in the MITF-KD or -KO melanoma cell lines were also affected by MITF in melanoma tumors in TCGA. Interestingly, using GSEA analysis, we observed increased enrichment of ECM genes in the MITF low tumors (Figure 3i). However, to rule out the possibility that the increased expression of ECM genes in TCGA MITF low melanoma tumors was derived from fibroblast cells, we removed the 130 melanoma TCGA samples that showed the highest expression of the fibroblast markers PDGFRB and ACTA2 and then assessed the expression of collagens across the 30 highest- and 30 lowest-MITF melanoma samples, which consistently showed that genes encoding ECM proteins are strongly enriched in MITF low tumors (Figure 3i). The enrichment for ECM genes was observed in both primary and metastatic tumors (Figure 3i). We did not observe a correlation between MITF expression and the most common BRAF, NRAS, and NF1 mutations found in melanoma (Figure 3j), indicating that the gene expression changes observed are controlled via transcriptional regulation, directly or indirectly imposed by MITF. We conclude that reduced MITF expression leads to activation of genes involved in ECM and focal adhesion in melanoma cells and tumors, and that in many cases this occurs through direct binding of MITF to their regulatory regions.

EMT genes are directly regulated by MITF

Genes involved in the epithelial-to-mesenchymal transition (EMT) process have been shown to play a role in melanoma drug resistance and have been linked to low MITF expression (Denecker et al., 2014; Caramel et al., 2013). Many EMT-inducing transcription factors, including SNAI2 and ZEB1, repress CDH1 (E-cadherin) (Moreno-Bueno et al., 2008). We tested whether melanoma tumors in TCGA that have high MITF expression also express important EMT regulators. While SNAI2 was shown to be positively correlated with MITF, the expression of CDH1 did not correlate with the expression of SNAI2 in the TCGA melanoma tumors, which is consistent with published findings in melanoma and melanocyte samples (Shirley et al., 2012), despite CDH1 being a canonical target of SNAI2 (Cano et al., 2000). Thus, E-cadherin is likely directly regulated by MITF and ZEB1.

[Figure 3 legend, continued: Error bars indicate standard error of the mean; * p value < 0.05, calculated using a paired t-test. (i) Gene set enrichment analysis using the ECM genes differentially expressed between ΔMITF-X6 and EV-SkMel28 cells in the top 30 MITF low and 30 MITF high samples, with high-fibroblast-marker samples removed; primary and metastatic melanomas from TCGA were analyzed separately. (j) Percentage of mutations in the indicated genes in the MITF low and MITF high tumors from fibroblast-free, primary, and metastatic TCGA tumor samples, respectively. The online version of this article includes the following source data for Figure 3: Source data 1. ECM gene expression quantified by qPCR in A375P cells.]

As expected, analysis of the TCGA data showed that the expression of CDH2 (N-cadherin) (R = −0.3, p<1.9e-11), TGFB1 (R = −0.49, p<2.2e-16), and ZEB1 (R = −0.41, p<2.2e-16) was anti-correlated with MITF in melanoma tumors, whereas the expression of CDH1 (R = 0.42, p<2.2e-16) and SLUG (SNAI2) (R = 0.43, p<2.2e-16) was positively correlated (Figure 4a). Consistent with this, the expression of the CDH1 and SNAI2 genes was reduced in MITF low tumors and ΔMITF-X6 cells, whereas the expression of CDH2, SOX2, TGFβ1, and ZEB1 was increased (Figure 4b). We also observed increased expression of CDH2 upon siRNA-mediated KD of MITF in SkMel28 and 501Mel cells; however, the level of CDH1 was decreased only in the siMITF SkMel28 cells (Figure 4b).
Interestingly, upon MITF overexpression in the pB-MITF-FLAG A375P cells, the expression of CDH2, SNAI2, SOX2, and TGFβ1 was decreased, whereas the expression of CDH1 and ZEB1 was increased (Figure 4b). RT-qPCR analysis of EMT genes in the MITF-KO cells confirmed that CDH1 expression was reduced 50- and 100-fold in the ΔMITF-X2 and ΔMITF-X6 cells, respectively, whereas CDH2 and TGFβ1 were significantly increased when compared to EV-SkMel28 cells (Figure 4c). Western blot analysis confirmed increased expression of the classical EMT marker protein CDH2 and decreased expression of CDH1 in both MITF-KO cell lines (Figure 4d,e). Analysis of CUT&RUN and publicly available MITF ChIP-seq data showed that the ZEB1, SOX2, CDH1, and CDH2 genes contain MITF-binding peaks in their intronic and promoter regions (Figure 2-figure supplement 1d; Laurette et al., 2015), whereas TGFβ1 does not. This suggests that MITF is not only involved in regulating the expression of ECM genes but may also be directly involved in regulating the expression of EMT genes, resulting in EMT-like changes in cell morphology and behavior.

MITF-mediated effects on ECM genes are reversible

The MITF rheostat model predicts that different levels of MITF activity modulate distinct phenotypic states of melanoma cells and that these effects are reversible (Lister et al., 2014). To determine whether the effects of long-term MITF knockout could be reversed, we performed a rescue experiment by introducing an exogenous MITF-FLAG or EV-FLAG construct into the MITF-KO cells and then used RT-qPCR to characterize the expression pattern of ECM genes. As expected, the control EV-FLAG-transfected MITF-KO cells exhibited increased expression of the ECM genes CDH2, ID1, and MMP15 as compared to the EV-SkMel28 control cells (Figure 5a-d), whereas the expression of CDH1 was reduced. Importantly, the expression of all four genes was partially rescued upon introducing the MITF-FLAG construct into ΔMITF-X6 cells; a smaller rescue effect was observed in ΔMITF-X2 cells transfected with MITF-FLAG (Figure 5a-d).

In order to overcome the partial rescue seen with the MITF-KO cells, we used the piggyBac transposon system to integrate a dox-inducible synthetic micro-RNA construct (miR-MITF) into 501Mel and SkMel28 cells, thus allowing inducible knockdown (KD) of MITF by addition of doxycycline (Figure 5e). At the same time, cells carrying a non-targeting control (miR-NTC) were generated. We induced MITF KD in the miR-MITF SkMel28 cell line by adding dox and removed it again after 24 hr to assay for gene and protein expression at defined time points (Figure 5e). We chose to focus on the ECM and EMT genes CDH1, CDH2, ITGA2, and SERPINA3, all of which are direct targets of MITF (Laurette et al., 2015). Our results showed that MITF mRNA and protein expression was significantly decreased after 24 hr of dox treatment and reached basal levels again 96 hr after dox removal (Figure 5f,g), showing that the dox-inducible system is suitable for reversibly modulating MITF levels. We observed a sharp decrease in CDH1 mRNA expression after 24 hr of dox treatment; however, 72 and 96 hr after dox removal, its expression had gradually increased, consistent with the restoration of MITF expression (Figure 5g). Similarly, the expression of genes repressed by MITF, such as CDH2, SERPINA3, and ITGA2, was sharply increased after 24 hr of dox treatment and decreased again 96 hr after dox removal (Figure 5h-j).
Western blotting showed that the expression of the E-cadherin (CDH1) protein was reduced, whereas the expression of N-cadherin (CDH2) was increased, when compared to the miR-NTC control (Figure 5f). After 72 and 96 hr of dox removal, MITF expression was restored, the expression of the E-cadherin protein increased back to initial levels, and the expression of N-cadherin was reduced compared to that observed at 24 hr of MITF KD (Figure 5f). These data show that, consistent with the rheostat model, the function of MITF as both a repressor and an activator of gene expression has reversible effects on the expression of EMT and ECM genes.

MITF affects the number of focal adhesions

Based on the observed increase in the expression of ECM and focal adhesion genes, we expected focal adhesion formation to be affected in the MITF-depleted cells. Indeed, immunostaining revealed an increased number of paxillin (PXN)-positive focal points (stained using PXN phospho-Tyr118 antibodies) around the cell periphery of MITF-KO cells as compared to EV-SkMel28 control cells (Figure 6-figure supplement 1a). Quantification of the focal points showed an approximately twofold increase in their numbers in both MITF-KO cell lines (Figure 6-figure supplement 1b). Transcriptomic data from the 473 melanoma tumor samples in TCGA showed a significant negative correlation between the expression of MITF and PXN in these samples (Figure 6-figure supplement 1c). We also assessed the expression of PXN in a panel of 163 patient-derived melanoma cell lines exhibiting different levels of MITF. This showed that expression of PXN was specifically induced in MITF low melanoma cell lines and displayed a negative correlation with MITF expression (Figure 6-figure supplement 1d,e).

In order to evaluate whether the formation of focal adhesions would be induced upon short-term MITF loss, we integrated the dox-inducible miR-MITF transgene into 501Mel and SkMel28 cells and detected focal adhesions using the PXN antibody. After a 24 hr induction of MITF KD, a twofold increase was observed in the number of PXN-positive focal points at the cell borders when compared to the miR-NTC control cell lines (Figure 6-figure supplement 1f-h). Analysis of ChIP-seq data showed an MITF peak in intron 6 of PXN containing the CACGTG motif (Figure 6-figure supplement 1i). This indicates that MITF affects the formation of focal adhesions by directly regulating the expression of PXN, a key player in focal adhesion.

Previous studies have shown that adaptive resistance to the BRAF V600E inhibitor vemurafenib leads to activation of focal adhesion and ECM-related pathways (Fallahi-Sichani et al., 2017). Indeed, treating the cells with vemurafenib led to a decrease in MITF protein expression in EV-SkMel28 cells, which is consistent with the literature. However, the expression of MITF in the 501Mel cells upon vemurafenib treatment was increased compared to a DMSO control (Figure 6-figure supplement 2a,b). This raises the question of whether the effects observed on ECM and focal adhesion genes upon BRAF inhibition are mediated through MITF. To evaluate the effects of BRAF inhibition on focal adhesions, we treated MITF-KO and EV-SkMel28 cells with vemurafenib and stained for phospho-paxillin (Tyr118). Consistent with the observation above, the MITF-KO cells showed a fourfold increase in the number of focal adhesions as compared to EV-SkMel28 cells under the control DMSO-treated conditions (Figure 6a [upper panel], b).
Treatment with vemurafenib resulted in a significant increase in the number of focal adhesions in the EV-SkMel28 cells, but a further increase was also observed in the MITF-KO cells (Figure 6a [lower panel], b). Consistent with this, knockdown of MITF induced through the miR-MITF construct in both 501Mel and SkMel28 cells led to an increased number of focal adhesions when compared to miR-NTC cells (Figure 6c,d [upper panels], e, f). Treatment with vemurafenib further increased the number of focal adhesions in SkMel28 cells expressing miR-NTC or miR-MITF, but again, more focal points were observed in miR-MITF cells under these conditions (Figure 6c,d [lower panels]). Importantly, vemurafenib treatment alone did not lead to an increase in focal adhesion formation in 501Mel cells expressing miR-NTC, which is consistent with the increased MITF protein expression upon vemurafenib treatment, whereas a major increase in focal adhesions was observed upon MITF depletion in the miR-MITF cells (Figure 6c [lower panel], e). These results suggest that the formation of focal adhesions upon vemurafenib treatment is in part dependent on changes in MITF expression. However, since a further increase is observed upon vemurafenib treatment of the knockout and knockdown lines, other factors must also be involved.

Since an increase in the formation of focal adhesions is a marker of MITF loss, we sought to inhibit focal adhesion formation using increasing concentrations of a FAK inhibitor (FAKi) in MITF-KO and EV-SkMel28 cells. We observed that after 72 hr of treatment with 10 µM FAKi, the viability of MITF-KO cells was significantly reduced compared to the EV-SkMel28 cells; the effect of FAKi on ΔMITF-X6 cells was more pronounced than on ΔMITF-X2 cells (Figure 6g,h). This suggests that the survival of MITF-KO cells is dependent on FAK signaling.

To understand whether the ECM and focal adhesion genes affected upon MITF loss overlap with the gene signature of melanoma cells that have been treated with BRAF inhibitors, we used single-cell RNA-sequencing data of human melanoma xenografts (Rambow et al., 2018). We focused on gene signatures specific for single-cell populations with low MITF: (i) a subpopulation of cells representing minimal residual disease (MRD) in melanoma, the small population of cells that remains upon drug treatment, and (ii) an invasive gene signature (Rambow et al., 2018). Our GSEA analysis showed that ΔMITF-X6 cells were significantly enriched for the MRD gene signature but not for the invasive signature found in another sub-population of MRD cells in the xenografts (Figure 6-figure supplement 2c). Among the genes that overlap between the MRD and ΔMITF-X6 cells are ECM genes such as COL4A1, ITGA1, ITGA6, LAMC1, and VCAN. The same findings were obtained using single-cell RNA-seq data of MITF-depleted zebrafish melanomas as well as bulk RNA-seq data of MITF low melanoma tumors (Travnickova et al., 2019). Both datasets showed positive enrichment with ΔMITF-X6 cells (Figure 6-figure supplement 2d). Importantly, we found that in the zebrafish data the ECM signature was specifically induced in the single-cell cluster from MITF low superficial tumors (representing minimal residual disease) compared to other single-cell clusters from MITF high melanomas (Figure 6-figure supplement 2e). These results suggest that the loss of MITF is an important mediator of MRD in melanoma and that MRD cells alter their extracellular environment.
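For GSEA comparisons like the one above, the methods section notes that genes were pre-ranked by a metric combining the p-value with the log fold change. The exact formula is not given in the text; a commonly used choice, shown here purely as an assumption, is sign(log2FC) × −log10(p). A minimal R sketch producing a .rnk file for the Broad GSEA tool's pre-ranked mode:

```r
# res: DEG table with columns 'gene', 'log2FC', 'pval' (placeholder names).
res$rank_metric <- sign(res$log2FC) * -log10(res$pval)
rnk <- res[order(res$rank_metric, decreasing = TRUE),
           c("gene", "rank_metric")]

# Write a tab-separated .rnk file for GSEA pre-ranked analysis.
write.table(rnk, "dMITF_X6_vs_EV.rnk", sep = "\t",
            quote = FALSE, row.names = FALSE, col.names = FALSE)
```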
MITF KO affects proliferation and migration

The rheostat model predicts that MITF loss should reduce cell proliferation but increase the migration potential of melanoma cells. We therefore measured the proliferative ability of the MITF-KO cells using different methods. First, we characterized cell confluency over time using IncuCyte live-cell imaging. This showed that both of the MITF-KO cell lines had a twofold reduction in proliferation rate as compared to the EV-SkMel28 cells (Figure 7a). Second, a BrdU incorporation assay showed that ΔMITF-X6 and ΔMITF-X2 cells had fewer BrdU-positive cells (20-25%) than the EV-SkMel28 cells (45%), suggesting that there are fewer actively proliferating cells in the MITF-KO lines than in the control cells (Figure 7b).

Previous analyses have shown that knocking down MITF leads to increased migration ability of melanoma cells (Carreira et al., 2006; Giuliano et al., 2010; Cheli et al., 2012; Javelaud et al., 2011; Bianchi-Smiraglia et al., 2017; Falletta et al., 2017). We therefore characterized the migration ability of our knockout cells. Strikingly, the wound scratch assay showed that the MITF-KO cells failed to close the wound in 24 hr, whereas the EV-SkMel28 cells were able to close the wound within that time (Figure 7c,d). To test whether the effects on migration were due to the long-term depletion of MITF in the MITF-KO cells, we performed the wound scratch assay upon MITF KD in the miR-MITF cells. Upon MITF KD, we observed a minor decrease in the ability of the cells to close the wound when compared to the control miR-NTC cells (Figure 7e,f). Next, we assessed the invasion ability of the MITF-KO cells using transwell chambers coated with matrigel. Interestingly, we found that the MITF-KO cells displayed a severe reduction in invasion ability compared to the EV-SkMel28 cells (Figure 7g,h). Taken together, our data suggest that depleting MITF negatively influences both the proliferation and the migration ability of the cells.

Discussion

In this study, we have shown that MITF directly binds to and represses the expression of ECM, EMT, and focal adhesion genes in human melanoma cells. We first observed this using our MITF-KO cells but verified our observations in other cell models by overexpression and knockdown of MITF using siRNA and an inducible microRNA against MITF (miR-MITF) in melanoma cells. Importantly, we showed that MITF low tumors in humans as well as in zebrafish have increased expression of ECM and focal adhesion genes. Together, our findings indicate that MITF acts as a transcriptional repressor of genes involved in ECM and focal adhesion.

A role for MITF as a repressor has been described in both melanoma cells and immune cells (Riesenberg et al., 2015; Hu et al., 2007). In myeloid precursor cells, MITF was shown to interact with EOS to recruit co-repressors to target genes (Hu et al., 2007), whereas in melanoma cells MITF binds directly to an E-box located in an enhancer of the c-JUN gene, leading to reduced expression of the gene (Riesenberg et al., 2015). Our results show that many of the genes whose expression is repressed by MITF are bound by MITF and contain E-boxes in their regulatory regions (Figure 2h). This suggests that direct binding of MITF is involved in their repression. Since we observed differences in secondary motifs between the repressed and activated genes, different co-factors may be involved in mediating the repression in each case.
The MITF-dependent rheostat model predicts that high MITF activity promotes proliferation, whereas low activity promotes invasion (Carreira et al., 2006). Consistent with the rheostat model, proliferation was severely reduced upon MITF knockout (Figure 7a,b). Unexpectedly, however, the migratory and invasive properties were reduced in both MITF-KO and MITF-KD (miR-MITF) cells (Figure 7c-h). Immunohistochemistry and single-cell sequencing studies of melanoma tumors have shown the existence of cells with low or no MITF expression (Goodall et al., 2008; Rambow et al., 2018; Travnickova et al., 2019). The involvement of MITF in migration has mostly been characterized using knockdown studies in melanoma cell lines, using either siRNA or shRNA and Matrigel-coated Boyden chambers (Carreira et al., 2006; Bianchi-Smiraglia et al., 2017; Cheli et al., 2011); in these studies, knocking down MITF resulted in increased migration properties. Cheli et al., 2012 also injected melanoma cells into the tail vein of mice and showed increased formation of metastases when MITF was knocked down. Two different pathways involved in migration were shown to be regulated by MITF: DIAPH1, a gene implicated in actin polymerization (Carreira et al., 2006), and the guanosine monophosphate reductase (GMPR) gene, encoding an enzyme involved in regulating intracellular GTP levels (Bianchi-Smiraglia et al., 2017). Surprisingly, more recent studies by Falletta et al., 2017 and Vlčková et al., 2018 failed to observe any effects on migratory/invasive properties upon MITF knockdown using the same cell lines as were used in the previous studies. Falletta et al., 2017 suggested that the translation factor eIF2α was needed along with the nutrient sensor ATF4 to mediate invasion of melanoma cells. Clearly, the experimental system used and additional triggers such as nutrient limitation might play a role in mediating the invasive phenotype. Interestingly, knocking down SMAD7 in melanoma cells resulted in a dual invasive-proliferative phenotype without affecting MITF expression (Tuncer et al., 2019). Thus, we might speculate that the migratory phenotype is a transient event and that, in order to observe migration, the cells need to be tested in a narrow time interval during which MITF activity is decreasing. Carmit Levy's group has recently shown that MITF oscillates upon UV exposure in order to synchronize the stress response and pigmentation (Malcov-Brog et al., 2018). It is possible that such a mechanism exists, for example, during melanoblast development, where oscillating MITF expression might ensure that proliferation and migration can both take place, but not at the same time. Previous work has suggested oscillations in the dependence of melanoblasts on the receptor tyrosine kinase KIT (Yoshida et al., 1996; Hou et al., 2000). It is plausible that activation of invasion genes requires a discontinuous presence of MITF, along with other co-factors, at specific time windows. According to the phenotype switching model of melanoma, the reversible switch between proliferative and invasive states is needed for melanoma progression (Arozarena and Wellbrock, 2019). In our case, the complete loss of MITF might trap the cells in a state where the MITF switch is dead and migration is therefore not possible. This would explain why we see neither the invasive gene signature nor changes in the expression of DIAPH1, p27, or GMPR.
Our analysis of published transcriptomic signatures of melanoma cells and tumors validated that upon MITF loss the expression of differentiation and proliferation genes was diminished, whereas the expression of drug-resistance and neural crest-like programs was enriched (Figure 2c). Importantly, comparing our gene signature with that of 86 melanoma cell lines showed that the ΔMITF-X6 cells displayed a more prominent loss of proliferative genes than a gain of invasive genes (Figure 2c). Similarly, Rambow et al., 2015 showed that melanoma cell lines expressing high levels of MITF (G1, T1, 501Mel, MNT-1, SKMel3) displayed high proliferation potential but less migratory, invasive, and subcutaneous tumor growth capability than cell lines with low MITF expression (WM1366, WM793, WM852, LU1205, A375M). The expression of TRIM63 and CAPN3 was over-represented in the cell lines with high MITF expression, and knocking down MITF led to a reduction of their expression. Interestingly, knocking down either TRIM63 or CAPN3 enhanced the invasive potential of 501Mel cells. In the ΔMITF-X6 cells, we observed a reduction only of TRIM63 expression. Thus, the transcriptome of our ΔMITF-X6 cells does not fully recapitulate the gene signature of invasive cells.

We identified MITF as an important transcriptional regulator of ECM and focal adhesion genes. Interestingly, we observed increased expression of TGFβ1, encoding an important regulator of ECM-related genes, in the MITF-KO cells and MITF low melanoma tumors (Figure 4b,c). It has been shown that TGFβ1 suppresses the expression of MITF in melanoblasts, thereby inhibiting differentiation into melanocytes (Nishimura et al., 2010). This autocrine signaling of TGFβ is retained in melanoma cells (Javelaud et al., 2008). According to Hoek et al., 2006, the MITF low transcriptional state is dictated by TGFβ1 signaling, which can suppress MITF expression, resulting in an invasive and drug-resistant phenotype (Miskolczi et al., 2018). This suggests that the genes induced upon MITF loss are partly due to induction of TGFβ signaling. However, our results suggest that MITF is directly involved in mediating the observed effects on the expression of ECM and focal adhesion genes. In addition, the relationship between MITF and the expression of TGFβ1 is not clear. Our observations suggest that knocking down MITF leads to a major increase in TGFβ1 mRNA expression in melanoma cells, suggesting that the effects are cell-autonomous and driven by MITF. However, there are no MITF peaks in or near the TGFβ1 gene in melanoma cells, leading us to hypothesize that the effects must be mediated through a hitherto unknown intermediary.

Enhanced expression and phosphorylation of paxillin has been linked to therapy resistance in other cancer types, such as lung cancer (Wu et al., 2016). In melanoma, an inverse relation between BRAF inhibition and the expression of ECM genes has been described as a marker of dedifferentiated drug-resistant cells (Fallahi-Sichani et al., 2017). Our data showed that the number of paxillin-positive dots was increased in both MITF-KO and miR-MITF cells as compared to controls (Figure 6-figure supplement 1a,b,f-h), and paxillin expression was inversely correlated with MITF expression in melanoma tissues and cell lines (Figure 6-figure supplement 1c-e). Interestingly, we found that treating cells devoid of MITF with a BRAF inhibitor resulted in a further increase in the formation of focal adhesions (Figure 6a-f).
It is worth mentioning that the increase in the number of focal adhesions was restricted to the SkMel28 melanoma cells, in which MITF protein levels were reduced upon vemurafenib treatment (Figure 6d,f; Figure 6-figure supplement 2a,b). In contrast, we did not observe a significant increase in the number of focal adhesions in the 501Mel cells, which gained MITF upon vemurafenib treatment (Figure 6c,e; Figure 6-figure supplement 2a,b). This highlights the role of MITF as a mediator of focal adhesion formation. However, how the synergistic effects of MITF loss and vemurafenib on focal adhesion formation are mediated is unclear. One way to explain an increase in the formation of focal adhesions is integrin clustering, which is essential for the activation of focal adhesion pathways (Harburger and Calderwood, 2009; Humphries et al., 2006). We observed an increase in the expression of several integrins, including ITGA1, ITGA2, ITGA6, ITGA10, and ITGB3, in the MITF-KO cells as well as in the siMITF 501Mel and SkMel28 cell lines (Supplementary file 5). In addition, the FLT1 receptor tyrosine kinase (VEGFR1) and its ligand VEGFA, which activate a pathway that phosphorylates FAK, a key mediator of focal adhesions, were increased in expression. Interestingly, both FLT1 and VEGFA have MITF-binding sites in their promoters, and MITF has previously been shown to regulate VEGFA expression (Louphrasitthiphol et al., 2019).

Exposure of melanoma cells to BRAF and MEK inhibitors has been shown to slow growth and result in increased expression of NGFR and of ECM and focal adhesion genes (Fallahi-Sichani et al., 2017). Consistent with these findings, we observed an up to 200-fold induction of the NGFR transcript in the MITF-KO cells compared to EV-SkMel28 cells, and we identified an MITF peak in the 3'UTR of NGFR in both the MITF CUT&RUN data from SkMel28 cells and the MITF ChIP-seq data from COLO829 cells (Webster et al., 2014); expression of the melanocyte differentiation marker and MITF target MLANA was 50- to 80-fold reduced in the MITF-KO cells (Figure 2-figure supplement 2a-d). Thus, it is possible that MITF affects focal adhesions both directly, by regulating the expression of genes involved in the process, and indirectly, by activating the expression of the signaling processes involved. Importantly, we found that the survival of MITF-KO cells might be dependent on the FAK pathway (Figure 6g,h); this might therefore be a therapeutic vulnerability of MITF-low melanoma cells and could potentially extend the current treatment options for melanoma.

An EMT-like process upon MITF loss has been described as being involved in driving drug resistance in melanoma (Denecker et al., 2014; Caramel et al., 2013). In addition, the degree of plasticity between EMT and mesenchymal-to-epithelial transition (MET) has been suggested to lead to high metastatic potential as well as therapeutic resistance (Stylianou et al., 2019; Pastushenko et al., 2018; Thompson and Nagaraj, 2018). Indeed, we observed changes in important EMT markers and regulators such as ZEB1, CDH1 (E-cadherin), CDH2 (N-cadherin), SNAI2, and TGFβ1 in the MITF-KO cells (Figure 4b-e) as well as in TCGA melanoma samples. Also, the MITF-KO cells showed increased expression of SOX2, which is important for neuronal stem cell maintenance and has been suggested to be important for self-renewal of melanoma tumor cells (Taranova et al., 2006; Santini et al., 2014; Figure 4b).
Importantly, the effects of MITF on the expression of E-cadherin, N-cadherin, and ECM genes (ITGA2 and SERPINA3) are reversible (Figure 5e-j). This suggests that MITF enables epithelial-to-mesenchymal plasticity (EMP), allowing the formation of a hybrid state between EMT and MET that enforces the aggressiveness of melanoma. The binary effects of MITF on the expression of EMT genes may be the molecular mechanism that explains its rheostat activity.

Minimal residual disease (MRD) is a major reason for relapse in cancer. We found that ΔMITF-X6 cells correlate positively with the gene signature of a population of MRD cells in melanoma tumors, as determined by single-cell RNA-seq of human PDX samples and zebrafish melanoma models (Figure 6-figure supplement 2c-e; Figure 2c; Rambow et al., 2018). This gene signature was different from the 'invasive gene signature' that the authors observed in a different set of melanoma cells (Rambow et al., 2018). Interestingly, the MRD melanoma cells in zebrafish express little to no MITF protein and have increased expression of ECM genes (Figure 6-figure supplement 2e). This suggests that induced expression of ECM genes together with low expression of MITF is one of the markers of MRD in melanoma. Thus, permanently losing MITF reprograms gene expression toward the drug-resistant state, suggesting that MITF-KO cells can be a tool to study drug resistance in melanoma. In the absence of MITF, melanoma cells may become MRD cells by reshaping their ECM and enhancing their attachment to the surface, thus forming quiescent cells that wait for an opportunity to change their phenotype and re-emerge as proliferative melanoma cells. Since melanoma cells can mediate these effects on their own, in the absence of the tumor microenvironment, this process appears to be cell-autonomous and under the direction of MITF, which instructs the cells to create their own microenvironment.

Generation of MITF-KO cells and validation of mutations using Sanger sequencing and whole genome sequencing

The CRISPR/Cas9 technology was used to generate knockout mutations in the MITF gene in SkMel28 cells. These cells carry the BRAF V600E and p53 L145R mutations (Leroy et al., 2014). Guide RNAs (gRNAs) were designed targeting exons 2 and 6 of MITF, both of which are common to all isoforms of MITF; exon 2 encodes a conserved domain of unknown function as well as a phosphorylation site, whereas a portion of exon 6 and the entire exon 7 encode the DNA-binding domain of MITF (Figure 1a). The gRNAs used were AGTACCACATACAGCAAGCC (Exon2-gRNA) and AGAGTCTGAAGCAAGAGCAC (Exon6-gRNA). The gRNAs were cloned into a gRNA expression vector (Addgene plasmid #43860) using BsmBI restriction digestion. The gRNA vectors were transfected into SkMel28 melanoma cells together with a Cas9 vector (a gift from Keith Joung) using the Fugene HD transfection reagent (#E2312, Promega) at a 1:2.8 ratio of DNA:Fugene. After transfection, the cells were cultured for 3 days in the presence of 3 µg/ml Blasticidin S (Sigma; stock 2.5 mg/ml) for selection and then serially diluted to generate single-cell clones. As a result, we obtained the ΔMITF-X2 cell line from targeting exon 2 of MITF and the ΔMITF-X6 cell line from targeting exon 6. The corresponding control cell line, termed EV-SkMel28, was generated by transfecting the cells with the empty-vector Cas9 plasmid.
Genomic DNA was isolated from the MITF knockout cell lines using the following procedure: cells (~2 × 10⁵) were trypsinized and spun down, and the supernatant was removed. The cell pellet was resuspended in 25 µL of PBS. Then 250 µL Tail buffer (50 mM Tris pH 8.0, 100 mM NaCl, 100 mM EDTA, 1% SDS) containing 2.5 µL of Proteinase K (stock 20 mg/mL) was added to the cell suspension in PBS and incubated at 56°C overnight. Then 50 µL of 5 M NaCl was added, mixed on a shaker for 5 min, and spun at full speed for 5 min at room temperature. The supernatant was then transferred into a new tube containing 300 µL isopropanol, mixed by inversion, and spun in a microfuge for 5 min at full speed. The resulting pellet was washed with 70% ethanol and air-dried at room temperature. Finally, the dried pellets were dissolved in nuclease-free water for at least 2 hr at 37°C. The appropriate regions (exons 2 or 6) of MITF were amplified using region-specific primers (MITF-2-Fw: CGTTAGCACAGTGCCTGGTA, MITF-2-Rev: GGGACAAAGGCTGGTAAATG; MITFexon6-fw: GCTTTTGAAAACATGCAAGC, MITFexon6-rev: GGGGATCAATTCTCCCTCTT). The amplified DNA was run on a 1.5% agarose gel at 70 V for 60 min. The bands were cut out of the gel and extracted using the NucleoSpin Gel and PCR Cleanup Kit (#740609.50, Macherey-Nagel). The purified DNA fragments were cloned into the pUC19 plasmid, 10 colonies were picked for each cell line, and the DNA was isolated and sequenced using Sanger sequencing.

Whole genome sequencing was performed using total genomic DNA isolated from the EV-SkMel28 and MITF-KO cells using the genomic isolation procedure above. Whole-genome sequencing was performed as described in Jónsson et al., 2017 using Illumina TruSeq methodology to an average genome-wide coverage of 37×. Sequencing results were analyzed using the R package CrispRVariants (Lindsay et al., 2016) in Bioconductor to quantify the mutations introduced in the MITF-KO cell lines.
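For the CrispRVariants step just described, a minimal hedged sketch of typical package usage is shown below; the BAM file name, target coordinates, and reference sequence are placeholders rather than the authors' actual code, and the exact arguments should be checked against the package vignette:

```r
library(CrispRVariants)

# gd: GRanges spanning the expected Cas9 cut site in MITF exon 2;
# ref_seq: reference sequence for that window (a DNAString);
# target.loc marks the cut-site position within the window.
crispr_set <- readsToTarget("dMITF_X2.bam", target = gd,
                            reference = ref_seq, target.loc = 17)

# Tabulate indel alleles and their read counts, from which allele
# frequencies like those quoted in the Results can be derived.
variantCounts(crispr_set)
```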
Generation of plasmids for stable doxycycline-inducible MITF knockdown and overexpression cell lines

The piggyBac transposon system was used to generate stable, inducible MITF knockdown cell lines. The inducible promoter is a Tetracycline-On system based on the reverse tetracycline transactivator (rtTA), which allows the regulation of expression by adding tetracycline or doxycycline to the media. We used a piggyBac transposase vector from Dr. Kazuhiro Murakami (Hokkaido University) (Magnúsdóttir et al., 2013). The microRNAs targeting MITF (Supplementary file 6) were cloned into the piggyBac vector downstream of a tetracycline response element (TRE). First, we used the BLOCK-iT RNAi designer to design microRNAs targeting MITF (exons 2 and 8), including a terminal loop and incomplete sense targeting sequences that are required for the formation of stem-loop structures (Supplementary file 6). To obtain short double-stranded DNAs with matching BsgI overhangs, the mature miRNAs were denatured at 95°C and then allowed to cool slowly in a water bath for annealing. The piggyBac vector pPBhCMV1-miR(BsgI)-pA-3 was digested with BsgI (#R05559S, NEB), the digested vector was excised from an agarose gel, and the DNA was purified. Following this, the annealed primers and the purified digested vector were ligated at a 15:1 insert-to-backbone molar ratio using Instant Sticky-end Ligase Master Mix (M0370S, NEB). A non-targeting control (miR-NTC) was used as a negative control. The ligation products were then transformed into competent cells, clones were isolated, and the plasmid DNA was sequenced to verify successful ligation. For the generation of piggyBac plasmids containing MITF-M-FLAG-HA and a control containing only FLAG, we amplified the MITF-M cDNA and FLAG sequence from the p3XFLAG-CMV-14 plasmid expressing mouse Mitf-M using the primers listed in Supplementary file 6 (pB-MITF-M-FLAG-HA) and then introduced them into the piggyBac vector by restriction digestion with EcoRI and SpeI.

RNA isolation, cDNA synthesis, and RT-qPCR

Cells were grown in 6-well culture dishes to 70-80% confluency, and RNA was isolated with TRIzol reagent (#15596-026, Ambion), treated with DNase I using the RNase-free DNase kit (#79254, Qiagen), and re-purified with the RNeasy Mini kit (#74204, Qiagen). The cDNA was generated from 1 µg of RNA using the High-Capacity cDNA Reverse Transcription Kit (#4368814, Applied Biosystems). Primers were designed using NCBI Primer-BLAST (Supplementary file 6), and RT-qPCR was performed using the SensiFAST SYBR Lo-ROX Kit (#BIO-94020, Bioline) on a BIO-RAD CFX38 real-time PCR machine. The final primer concentration was 0.1 µM, and 2 ng of cDNA was used per reaction. Quantitative real-time PCR reactions were performed in triplicate, and relative gene expression was calculated using the ΔΔCt method (Livak and Schmittgen, 2001). The geometric mean of β-actin and human ribosomal protein lateral stalk subunit P0 (RPLP0) was used to normalize expression of the target genes. Standard curves were made, and the efficiency was calculated using the formula E = 10^(−1/slope).

Immunostaining

Cells were seeded on 8-well chamber slides (#354108, Falcon), grown to 70% confluency, and then fixed with 4% paraformaldehyde (PFA) diluted in 1x PBS for 15 min. After washing three times with PBS and blocking with 150 µL blocking buffer (1x PBS + 5% normal goat serum + 0.3% Triton X-100) for 1 hr at room temperature, cells were stained overnight at 4°C with the appropriate primary antibodies diluted in antibody staining buffer (1x PBS + 1% BSA + 0.3% Triton X-100). The wells were washed three times with PBS and stained for 1 hr at room temperature with the appropriate secondary antibodies, diluted in antibody staining buffer. The wells were washed once with PBS, followed by DAPI staining at a final concentration of 0.5 µg/mL in 1x PBS (1:5000, #D-1306, Life Technologies) and two additional washes with PBS. Subsequently, wells were mounted with Fluoromount-G (Ref 00-495802, Thermo Fisher Scientific) and covered with a cover slide. Slides were stored at 4°C in the dark.

BrdU assay and FACS analysis

Cells were grown on 6-well plates overnight and treated with a final concentration of 10 µM BrdU for 4 hr. The cells were trypsinized, washed with ice-cold PBS, and then fixed with 70% ethanol overnight. Next, the cells were centrifuged at 500 g for 10 min and then permeabilized with 2 N HCl/Triton X-100 for 30 min, followed by neutralization with 0.1 M Na₂B₄O₇·10H₂O. Cells were analyzed on a FACS machine (Attune NxT, Thermo Fisher Scientific), and the data were analyzed using FlowJo software.

IncuCyte live-cell imaging

Cells were seeded onto 96-well plates in triplicate in 200 µL medium with 10% FBS at a density of 2000 cells per well. Images were recorded with the IncuCyte system at 2 hr intervals over a 4-day period. Images were taken at 10× magnification, and four images were collected per well. Collected images were then analyzed using the IncuCyte software by measuring cell confluency. Relative confluency was calculated by dividing the confluency at each subsequent time point by the confluency at the initial time point.
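As a worked illustration of the ΔΔCt quantification and the primer-efficiency formula described in the RT-qPCR paragraph above, here is a minimal R sketch; all Ct values and the slope are placeholders:

```r
# Mean Ct values for one target gene in a treated sample and a control
# (placeholder numbers).
ct_target <- c(sample = 24.1, control = 26.3)

# Mean Ct values for the two reference genes, beta-actin and RPLP0.
ct_ref <- rbind(actb  = c(sample = 17.2, control = 17.0),
                rplp0 = c(sample = 18.1, control = 18.3))

# Normalize to the geometric mean of the reference genes (as described
# in the text), then compute relative expression as 2^-ddCt.
ref_geomean <- apply(ct_ref, 2, function(x) exp(mean(log(x))))
dct  <- ct_target - ref_geomean
ddct <- dct["sample"] - dct["control"]
fold_change <- 2^(-ddct)

# Primer efficiency from a standard-curve slope: E = 10^(-1/slope).
slope <- -3.32               # placeholder slope
efficiency <- 10^(-1/slope)  # ~2.0 corresponds to 100% efficiency
```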
Relative confluency was calculated by dividing the confluency at each subsequent time point by the confluency of the initial hour.

Wound scratch and transwell invasion assay
A total of 2×10^4 cells were seeded per well of a 96-well plate (Nunclon Delta surface, Thermo Scientific, #167008) to reach a confluent monolayer. Scratches were made with a WoundMaker 96 (Essen Bioscience) and imaging was performed with the IncuCyte Live Cell Imaging System (Essen Bioscience). The recorded images of the scratches were analyzed with the IncuCyte software to quantify gap closure. For the invasion assay, transwell chambers with 8 µm pore size (Thermo Scientific Nunc) were coated with Matrigel matrix from Corning (Thermo Scientific). Then a cell suspension of 1×10^5 cells/300 µL in RPMI 1640 supplemented with 0.1% FBS was added to the Matrigel-coated upper chamber, and medium containing 10% FBS was added to the lower chamber as a chemoattractant. Cells were allowed to invade for 48 hr, after which the cells that had migrated to the other side of the membrane were fixed with 4% PFA and stained with DAPI. Images were acquired using QImaging (Pecon, software Micro-Manager 1.4.22) at 10x magnification, and the cells were counted using ImageJ software.

RNA sequencing and data analysis
We isolated total RNA as described above from the EV-SkMel28 and ΔMITF-X6 cell lines and assessed RNA quality using a Bioanalyzer. An RNA integrity number (RIN) above eight was required for generating RNA libraries. The mRNA was isolated from 800 ng total RNA using the NEBNext Poly(A) mRNA isolation module (E7490, NEB). The RNA was fragmented at 94°C for 16 min in a thermal cycler. Purified fragmented mRNA was then used to generate cDNA libraries for sequencing using the NEBNext Ultra Directional RNA Library Kit (E7420S, NEB) following the protocol provided by the manufacturer, with these modifications: adaptors were freshly diluted 10x before use, and a total of 15 PCR cycles were used to amplify the library. A total of 8 RNA libraries were prepared, with four biological replicates for each cell line (EV-SkMel28 and ΔMITF-X6). Purified RNA sequencing libraries were paired-end sequenced with 30 million reads per sample. Transcript abundance was quantified with Kallisto (Bray et al., 2016), with the index built on the GRCh38 reference transcriptome. Differential expression analysis was performed using Sleuth (Pimentel et al., 2017) to assess differentially expressed genes between EV-SkMel28 and ΔMITF-X6. Both the likelihood ratio test (LRT) and the Wald test were used to model differential expression between ΔMITF-X6 and EV-SkMel28 cells. The LRT is more stringent when estimating differentially expressed genes (DEGs), whereas the Wald test gives an estimate of the log fold change. Therefore, results from the LRT were intersected with the Wald test results to obtain significant DEGs with fold changes included. We selected differentially expressed genes with the cutoffs |log2(fold change)| ≥ 1 and qval < 0.05, as sketched in the snippet below. Functional enrichment analyses (GO terms and KEGG pathways) were performed using clusterProfiler in the Bioconductor R package, using the Benjamini-Hochberg procedure with adjusted p value < 0.05 as a cutoff (Yu et al., 2012). Gene set enrichment analysis was performed using the GSEA software from the Broad Institute (Subramanian et al., 2005). GSEA was run with pre-ranked options and gene lists were provided manually to assess enrichment. Differentially expressed genes were ranked by combining the p-value with the log fold change as input for the gene set enrichment analysis.
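A hedged sketch of the DEG selection just described: intersecting the LRT hits with the Wald fold changes and applying the cutoffs. The tiny in-memory tables, the column names (target_id, qval, b, following sleuth's usual output, where b is a natural-log effect size) and the values are illustrative assumptions, not the paper's data.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the sleuth LRT and Wald result tables
lrt = pd.DataFrame({"target_id": ["g1", "g2", "g3"],
                    "qval": [0.001, 0.20, 0.01]})
wald = pd.DataFrame({"target_id": ["g1", "g2", "g3"],
                     "qval": [0.002, 0.18, 0.02],
                     "b": [1.40, 0.10, -0.80]})

wald["log2fc"] = wald["b"] / np.log(2)      # convert ln fold change to log2

# intersect the (more stringent) LRT hits with the Wald fold changes
deg = lrt.loc[lrt["qval"] < 0.05, ["target_id"]].merge(
    wald[["target_id", "log2fc"]], on="target_id")
deg = deg[deg["log2fc"].abs() >= 1]         # |log2(fold change)| >= 1 cutoff
print(deg)
```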
Analysis of human melanoma tumor samples from The Cancer Genome Atlas (TCGA)
The quantified RNA-seq data from 473 melanoma samples were extracted from The Cancer Genome Atlas database using the TCGAbiolinks package in R Bioconductor (Colaprico et al., 2016). The lists of MITF-low and MITF-high samples were generated by sorting the samples based on MITF expression. The 30 tumor samples with the highest MITF expression and the 30 tumor samples with the lowest MITF expression were selected for the downstream differential expression analysis built into the TCGAbiolinks package. Principal component analysis (PCA) plots were generated using the normalized count expression of the 200 most significantly differentially expressed genes between MITF-low and MITF-high samples and between EV-SkMel28 and ΔMITF-X6 cells.

CUT and RUN
To identify direct MITF target genes, we performed anti-MITF Cleavage Under Targets and Release Using Nuclease (CUT and RUN) sequencing in SkMel28 cell lines as described (Skene and Henikoff, 2017) with minor modifications. Cells in log-phase culture (approximately 80% confluent) were harvested by cell scraping (Corning), centrifuged at 600 g (Eppendorf centrifuge 5424) and washed twice in calcium-free wash buffer (20 mM HEPES pH 7.5, 150 mM NaCl, 0.5 mM spermidine and protease inhibitor cocktail, cOmplete Mini, EDTA-free, Roche). Pre-activated concanavalin A-coated magnetic beads (Bangs Laboratories, Inc) were added to cell suspensions (200,000 cells) and tubes were incubated at 4°C for 15 min. Antibody buffer (wash buffer with 2 mM EDTA and 0.03% digitonin) containing anti-MITF (Sigma, HPA003259) or rabbit IgG (Millipore, 12-370) was added and cells were incubated overnight at 4°C on rotation. The following day, cells were washed in dig-wash buffer (wash buffer containing 0.03% digitonin) and pAG-MNase was added at a concentration of 500 µg/mL. The pAG-MNase enzyme was purified following a previously described protocol (Meers et al., 2019). The pAG-MNase reactions were quenched with 2X Stop buffer (340 mM NaCl, 20 mM EDTA, 4 mM EGTA, 0.05% digitonin, 100 µg/mL RNase A, 50 µg/mL glycogen, and 2 pg/mL sonicated yeast spike-in control). Released DNA fragments were treated with Proteinase K (1 µL/mL, Thermo Fisher Scientific) for 1 hr at 50°C and purified by phenol/chloroform extraction and ethanol precipitation. CUT-and-RUN experiments were performed in parallel as positive controls and fragment sizes were analyzed using a 2100 Bioanalyzer (Agilent). All CUT-and-RUN experiments were performed in duplicate.

Library preparation and data analysis
CUT and RUN libraries were prepared using the KAPA Hyper Prep Kit (Roche). Quality control post library amplification was conducted using the 2100 Bioanalyzer for fragment analysis. Libraries were pooled to equimolar concentrations and sequenced with paired-end 150 bp reads on an Illumina HiSeq X instrument. Paired-end FastQ files were processed through FastQC (Andrews, 2010) for quality control. Reads were trimmed using Trim Galore version 0.6.3 (developed by Felix Krueger at the Babraham Institute) and Bowtie2 version 2.1.0 (Langmead and Salzberg, 2012) was used to map the reads against the hg19 genome assembly. The mapping parameters were as previously described (Meers et al., 2019). The accession number for the CUT and RUN sequencing data reported in this paper is GSE153020.
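A small sketch of the MITF-high/MITF-low sample selection described in the TCGA section above. The genes-by-samples matrix is a random placeholder standing in for the normalized counts pulled with TCGAbiolinks; gene and sample names are illustrative.

```python
import numpy as np
import pandas as pd

# Placeholder normalized-count matrix for the 473 melanoma samples
rng = np.random.default_rng(0)
samples = [f"TCGA-{i:03d}" for i in range(473)]
expr = pd.DataFrame(rng.lognormal(5, 1, (3, 473)),
                    index=["MITF", "TYR", "AXL"], columns=samples)

# Sort samples by MITF expression and take the extremes
mitf = expr.loc["MITF"].sort_values()
mitf_low = list(mitf.index[:30])    # 30 tumors with the lowest MITF expression
mitf_high = list(mitf.index[-30:])  # 30 tumors with the highest MITF expression
```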
ChIP-seq analysis of public MITF datasets
Raw FASTQ files for MITF ChIP-seq were retrieved from the GEO archive under the accession numbers GSE50681 and GSE61965 and subsequently mapped to hg19 using Bowtie. Peaks were called using MACS with the input file as control (p value < 1e-05) and wig files were generated. Subsequently, the wig files were converted to bigWig using the UCSC tool wigToBigWig with the following command line: wigToBigWig file.wig hg19.chrom.sizes output.bw -clip, and then converted to bedGraph using the UCSC tool bigWigToBedGraph. The hg19 chromosome size file was downloaded from the UCSC genome browser. We used the R package ChIPseeker (Yu et al., 2015) to annotate ChIP-seq peaks to genes, plot the distribution of peaks around transcription start sites (TSS), and compute the fraction of peaks across the genome. For motif analysis, MEME-ChIP (Ma et al., 2014) was used on the DNA sequences corresponding to the peaks that were present in the induced and reduced DEGs of EV-SkMel28 vs. ΔMITF-X6 cells.

Statistical analysis
All statistical tests were performed using GraphPad Prism; one-way or two-way ANOVA was performed and multiple-testing correction was applied as indicated in the figure legends.
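A sketch of the wig-to-bigWig-to-bedGraph conversion described above, wrapping the two UCSC command-line tools (assumed to be installed and on the PATH); the file names are placeholders.

```python
import subprocess

wig, sizes = "file.wig", "hg19.chrom.sizes"
bw, bg = "output.bw", "output.bedgraph"

# wigToBigWig converts the MACS wig output to bigWig; -clip drops
# coordinates that fall outside the chromosome bounds instead of aborting
subprocess.run(["wigToBigWig", wig, sizes, bw, "-clip"], check=True)

# bigWigToBedGraph then converts the bigWig track to bedGraph
subprocess.run(["bigWigToBedGraph", bw, bg], check=True)
```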
2020-07-16T09:02:47.664Z
2020-07-15T00:00:00.000
{ "year": 2021, "sha1": "a1afb92195af0b959a28ab229d127716d55c2d9b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.63093", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86a8ab8191862747a2f1f72732d95cd18155eb0a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Biology", "Medicine" ] }
15918455
pes2o/s2orc
v3-fos-license
The effect of anakinra on retinal function in isolated perfused vertebrate retina

Purpose: Blockage of the interleukin 1 (IL-1) signaling pathway has been proposed for the treatment of inflammatory disorders like those affecting the retina and its adjacent tissue. Herein, we evaluated one of those inhibitory drugs, anakinra (Kineret®), with respect to its safety profile, with emphasis on retinal function from an electrophysiological point of view. Methods: Bovine retina preparations were perfused with two different concentrations of anakinra (1 mg/mL and 2 mg/mL). An electroretinogram (ERG) was recorded and b-wave recovery assessed. Results: Exposure to anakinra at a concentration of 1 mg/mL did not decrease the b-wave amplitude, whereas 2 mg/mL resulted in a significant reduction. Conclusions: Based on these preliminary results, anakinra at a dose as low as 1 mg/mL could be regarded as safe for retinal function. However, dosages of 2 mg/mL and above do have toxic electrophysiological effects, at least in the short term.

Introduction
Local and systemic inflammation is a hallmark of ocular diseases like uveitis, diabetic retinopathy, and age-related macular degeneration. 1 Various cytokines are up-regulated during the complex interaction of different cells and maintain the inflammatory milieu. 2 More and more cells become activated, and the triggering of further signaling cascades releases additional proinflammatory cytokines and proteases, resulting in damage to the surrounding tissue. Without timely and proper treatment, severe vision loss, which might be irreversible, is the consequence. One of the major cytokines involved in the pathogenesis of intraocular inflammation is interleukin 1 (IL-1). 3 Its downstream signaling via the IL-1 receptor activates the nuclear factor NF-kB, which enhances the expression of proinflammatory genes including cytokines, chemokines, and adhesion molecules. Understanding these molecular mechanisms led to the development of drugs like anakinra (Kineret®). Anakinra, an IL-1 receptor antagonist, is already approved for rheumatoid arthritis and is being considered for even more inflammatory diseases. 4 While some studies exist in which anakinra was administered systemically to patients suffering from uveitis associated with Still's or Behcet's disease, local application to the eye has not been explored to a satisfactory extent, and research emphasizing ocular safety is still needed. 5,6 As we have previously shown, the isolated superfused bovine retina is a good, sensitive tool for pharmacological testing. 7,8 We were able to demonstrate the effects of various drugs on retinal function, and our results could also be transferred to the human retina. Since many types of retinal cells and synapses are involved in the generation of the b-wave, a reduction of this parameter indicates substantial dysfunction of retinal neurons. Therefore, the b-wave amplitude of an electroretinogram (ERG) is a very sensitive indicator of overall retinal integrity and a useful tool to investigate drug biocompatibility. The aim of the present study was to assess the safety profile of anakinra with regard to effects on retinal function from an electrophysiological point of view.

Methods
Preparations were performed as previously described. 7-10 In brief, freshly enucleated bovine eyes were opened equatorially.
The vitreous was removed, and a circular piece of the most central posterior segment was obtained using a 7 mm trephine. The retina was separated from the underlying pigment epithelium and mounted on a mesh occupying the center of a perfusion chamber. The ERG was recorded in the surrounding nutrient solution via two silver/silver-chloride electrodes on either side of the retina. The chamber was installed in an electrically and optically isolated air thermostat. The perfusion velocity was controlled by a roller pump (1 mL/min), and the temperature was kept constant at 30 °C. The perfusion medium (NaCl 120, KCl 2, MgCl2 0.1, CaCl2 0.15, Na2HPO4 13.5, and glucose 5 mmol/L) was pre-equilibrated with oxygen, and the oxygen level was monitored by a Clarke electrode. Retinas were dark-adapted, and ERGs were elicited at 5-min intervals using a single white flash for stimulation. The flash intensity was set to 6.3 mlx at the retinal surface using calibrated neutral density filters. The duration of light stimulation (500 ms) was controlled by a timer. ERGs were filtered and amplified (100-Hz high-pass filter, 50-Hz notch filter, 100,000× amplification) using a Grass CP 511 amplifier. Data were processed and converted using an analog-to-digital (AD) data acquisition board (NI USB-6221; National Instruments, Austin, TX, USA) and a personal computer (PC). The ERGs were recorded and analyzed with DASYLab Professional Version 10.0.0 (National Instruments, Austin, TX, USA). The retina was stimulated repeatedly until stable amplitudes were reached. Then the retinal preparations (n = 3 for each dosage) were superfused with agent-containing medium for 30 min (retinal exposure time). For this purpose, 50 mg or 100 mg anakinra (Kineret®) were dissolved in 50 mL standard medium. Afterwards the preparation was reperfused with drug-free standard medium for at least a further 30 min. The b-wave amplitude was measured from the trough of the a-wave to the peak of the b-wave, and the percentage of b-wave reduction after exposure was calculated. Changes of the b-wave amplitude were carefully monitored. Furthermore, the reversibility of the drug impact on the b-wave after reperfusion with standard medium was assessed. For statistical analysis of the b-wave amplitude, the software Origin 6.0 (Microcal Inc, Northampton, MA, USA) was used. Experiments were replicated three times. Values are expressed as the mean ± standard deviation. Significance was calculated with a Student t test. Levels of P ≤ 0.05 were regarded as statistically significant.

Results
Stable ERG amplitudes were reached within 30 min of perfusion. Environmental parameters such as pH, osmotic pressure, temperature, and pO2 remained unchanged during the whole experiment. In this model, exposure to 1 mg/mL anakinra did not decrease the b-wave amplitude significantly (−3.88% ± 5.38%; P = 0.7855), whereas 2 mg/mL applied for 30 min led to a statistically significant reduction (−51.24% ± 31.71%; P = 0.0247) of the amplitude (Fig. 1). Yet after reperfusion with standard nutrient solution, the b-wave amplitude recovered, and at the end of the washout a partial recovery was seen, such that the b-wave amplitude was not significantly lower compared to the phase before exposure to anakinra (P = 0.8831).

Discussion
The purpose of this study was to evaluate the effect of anakinra (Kineret®) at different dosages on retinal function.
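A minimal sketch of the b-wave quantification described in the Methods above: the amplitude from the a-wave trough to the b-wave peak, the percent change after drug exposure, and a Student t test. The traces, index windows and percent values are illustrative placeholders, not the measured data.

```python
import numpy as np
from scipy import stats

def b_wave_amplitude(erg, a_slice, b_slice):
    """Amplitude from the a-wave trough to the b-wave peak.
    erg: 1-D voltage trace; a_slice/b_slice: index windows bracketing
    the expected a-wave trough and b-wave peak."""
    return erg[b_slice].max() - erg[a_slice].min()

# Illustrative percent b-wave changes for n = 3 preparations per dosage
reduction_1mg = np.array([-9.0, 1.5, -4.1])
reduction_2mg = np.array([-20.0, -47.9, -85.8])

# Student t test of the percent change against zero, as a stand-in for the
# comparison of pre- and post-exposure amplitudes
print(stats.ttest_1samp(reduction_1mg, 0.0))
print(stats.ttest_1samp(reduction_2mg, 0.0))
```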
Anakinra is a recombinant human IL-1 receptor antagonist that is identical to the naturally occurring nonglycosylated form with the exception of one N-terminal methionine. 3,4 It is used systemically to treat rheumatoid arthritis and patients with neonatal-onset multisystem inflammatory disease (NOMID). Moreover, several trials are underway for various conditions underlying chronic inflammatory states such as cardiovascular disease, diabetes, and uveitis. 11-13 In all previous studies, anakinra was administered systemically. 5,6 However, data based on local intraocular application, which would be favorable in terms of uveitis because of the blood-retina barrier, are limited. It has been shown that intravitreally injected anakinra suppresses autoimmune uveitis in rats through decreased levels of IL-1 and even TNF-alpha. 14 In another animal study, 0.75 mg anakinra injected into the vitreous (0.05 mL of a 15 mg/mL solution) of rats was able to successfully inhibit the growth of choroidal neovascular membranes in an experimental model of exudative age-related macular degeneration. 15 There are no published studies regarding the ocular safety of anakinra. Our results suggest that 2 mg/mL anakinra affects retinal function from an electrophysiological point of view at least temporarily, while a lower dosage of 1 mg/mL might not. Considering that significantly higher doses of anakinra were needed to achieve a significant anti-inflammatory effect in the animal studies mentioned before, a safe concentration with a sufficient therapeutic outcome might not exist. Nevertheless, these results should be interpreted with caution. Despite the fact that our model mimics the retinal response to toxic agents quite properly, it remains an ex vivo model. Also, the safety of the drug in an ex vivo situation (cadaver eyes) may differ from an in vivo one (living eyes). In addition, the animal studies have shown that the beneficial effect of anakinra is only temporary, and repeated injections may be needed to achieve the best therapeutic result. However, repeated injections might lead to more side effects and long-term dysfunction, which is not covered by our experimental setup. Moreover, inflammatory responses are not captured by our system, but should be regarded as highly important in real life since they might affect the threshold of susceptibility to drug-mediated toxicity. 16 Also, this study is limited by the small sample size. In conclusion, the current data do not support the routine use of intravitreally administered anakinra in any retinal disease outside controlled trials. Further investigations, particularly dose-finding studies, will shed light on the potential therapeutic benefits for vascular and neovascular diseases of the choroid and retina.

Fig. 1. Effects of anakinra on the b-wave amplitude: exposure to 1 mg/mL did not decrease the b-wave amplitude, whereas 2 mg/mL applied for 30 min led to a significant reduction (P = 0.0247, Student t test).
2018-04-03T04:56:39.744Z
2017-01-12T00:00:00.000
{ "year": 2017, "sha1": "e2fcf181a09de738f2474fbd5bb112da745c099b", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.joco.2016.12.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2fcf181a09de738f2474fbd5bb112da745c099b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
20285292
pes2o/s2orc
v3-fos-license
Relating side chain organization of PNIPAm with its conformation in aqueous methanol

Combining nuclear magnetic resonance (NMR), dynamic light scattering (DLS), and µs-long all-atom simulations with two million particles, we establish a delicate correlation between increased side chain organization of PNIPAm and its collapse in aqueous methanol mixtures. We find that the preferential binding of methanol with PNIPAm side chains, bridging distal monomers along the polymer backbone, results in increased organization. Furthermore, methanol-PNIPAm preferential binding is dominated by hydrogen bonding. Our findings reveal that the collapse of PNIPAm is dominated by enthalpic interactions and that standard poor-solvent (entropic) effects play no major role.

1 Introduction
Polymer conformations in solvent mixtures often exhibit puzzling and paradoxical behavior. One such phenomenon is co-non-solvency, which occurs when two competing (miscible) good solvents for a polymer are mixed together; as a result, the same polymer collapses within intermediate solvent-cosolvent mixing ratios in bulk [1-9] and near surfaces. 10,11 One popular system that shows co-non-solvency is the conformational behavior of poly(N-isopropylacrylamide) (PNIPAm) in aqueous alcohol mixtures. Even though the phenomenon is usually associated with PNIPAm, [1-4] the name co-non-solvency was first coined when polystyrene chains were dissolved in a mixture of cyclohexane and DMF. 12 Furthermore, these systems present both LCST and UCST temperature effects. 13-18 This suggests that the effect, contrary to common chemical belief, is not 'only' restricted to polymers exhibiting LCST behavior and thus is independent of specific chemical details.

The microscopic origin of the puzzling coil-globule-coil transition is a matter of intense debate. Here, extensive experimental, 1-5,9 theoretical 6,8,19-21 and computer simulation 4,7,8,22 studies have been performed, in some cases also in conjunction with analytical theory 1,5 and parameter estimation for the interaction strength. 9 On the theory side, ever since the first theoretical work employing the Flory-Huggins theory, 1 several analytical works have been proposed to explain co-non-solvency, based on the cooperativity effect, 6 particle-based theory, 8 the Flory-Huggins lattice model at high polymer concentrations, 20 and the off-lattice statistical model, 21 to name a few. Furthermore, former simulations are mostly limited to a few studies. 4,7,8 In this context, using a semi-grand canonical molecular dynamics approach, two of us have previously shown that PNIPAm has a significantly higher affinity (or preferential binding) towards methanol than towards water, by a factor of ≈4 k_B T per monomer. 7 This is further exemplified by potential of mean force (PMF) calculations, where a clear preferential interaction of methanol with PNIPAm was observed. 23 This indicates that the polymer collapse in miscible good solvents, such as water and methanol for PNIPAm, is dictated by the relative (enthalpic) interaction strengths between methanol-PNIPAm and water-PNIPAm. Recently, it has been shown that the effect of co-non-solvency can also be observed for tertiary butyl alcohol in methanol-water mixtures, where it is driven by enthalpic interactions. 24
Furthermore, the enthalpically driven collapse of PNIPAm is against the common understanding of the standard poor-solvent collapse of LCST polymers, where the solvent entropy gain plays a crucial role in the polymer collapse. Because a wide variety of polymers show the co-non-solvency phenomenon when dissolved in appropriate mixtures of solvents, a unified concept of co-non-solvency was proposed within a generic approach. 8,19 There, simple Lennard-Jones (LJ) interactions between monomer and (co)solvent are sufficient to explain the co-non-solvency effect at constant temperature 8,19 and related effects, 25,26 while ignoring all chemical details that often only contribute to a mere numerical prefactor. However, an LJ representation of a (co)solvent bead may represent several methanol and/or water molecules. 8

What makes the co-non-solvency of PNIPAm an interesting and puzzling effect is that concepts known from conventional polymer science are often insufficient to describe the phenomenon. For example: (1) the polymer collapses while the solvent quality remains good or even gets increasingly better by the addition of the better cosolvent, 7,8 (2) standard poor-solvent collapse or entropic effects are irrelevant, (3) the phenomenon is independent of any underlying LCST or UCST temperature behavior of the polymer, 13-19 and (4) it is driven by large (local) concentration fluctuations of the different solvent components near the polymer, 19 making a mean-field description unsuitable. In this work, we revisit the phenomenon of co-non-solvency of PNIPAm in aqueous methanol mixtures by combining nuclear magnetic resonance (NMR), dynamic light scattering (DLS) and µs-long, two-million-particle all-atom simulations. Note that we have used all-atom simulations instead of a semi-grand canonical setup. 7 This is because, in the case of a good-solvent chain, the chain extension covers almost the full simulation domain, consisting of a box of ≈30 nm edge length. To better correlate the different approaches, we match the polymer length N_l, in units of the persistence length l_p, to be similar across methods; it was chosen as N_l ≈ 100 l_p. Our results provide experimental support for the claims presented in the four points mentioned above.

The remainder of the paper is organized as follows: in Section 2 we briefly state the methodology for the simulations, material synthesis and experimental measurements. Section 3 presents results and discussion. Finally, we draw our conclusions in Section 4.

2.1 Nuclear magnetic resonance measurements
The 1H-NMR experiments 27,28 were measured with a 5 mm triple resonance TXI 1H/13C/15N probe equipped with a z-gradient on an 850 MHz Bruker AVANCE III system. For proton spectra, 128 transients were used with a 9.5 µs long 90° pulse and a 17,600 Hz spectral width, together with a recycling delay of 5 s. The temperature was regulated at 298.3 K and calibrated with a standard 1H methanol NMR sample using the Topspin 3.1 software (Bruker). Temperature was controlled with a VTU (variable temperature unit) with an accuracy of ±0.1 K. Diffusion Ordered NMR Spectroscopy (DOSY-NMR) experiments were performed with a gradient strength of 5.350 G mm^-1 on a Bruker Avance-III 850 NMR spectrometer. The gradient strength of the probes was calibrated using a sample of 2H2O/1H2O at a defined temperature and compared with the theoretical diffusion coefficient of 2H2O/1H2O (values taken from the Bruker diffusion manual) at 298.3 K.
The diffusion delay time (Δ; Bruker parameter d20) was optimized for the TXI probe to 60 ms, while the gradient pulse length was kept at 1.6 ms. The optimization was realized by comparing the remaining intensity of the signals at 2% and 95% gradient strength. The intensity loss of the echo was in the range of 90%. Using a longer diffusion time causes a loss of signal intensity (from the echo) due to a short spin-lattice relaxation time T_1, which was determined with the inversion recovery method 27 before the diffusion measurements.

The diffusion measurements were done with a 2D DOSY sequence 29 by incrementing the gradient in 16 linear steps from 2% to 100% with the TXI probe. The calculation of the diffusion value was automatically done with the mono-exponential function 30

ln(I(G)/I(0)) = -γ² G² δ² (Δ - δ/3) D,

where I(G) and I(0) are the intensities of the signals with and without gradient, γ is the gyromagnetic ratio of the nucleus (1H in these measurements), G is the gradient strength, δ the duration of the pulsed field gradient (PFG), D the diffusion coefficient in m² s^-1 and Δ the 'diffusion time' between the beginnings of the two gradient pulses. The relaxation delay between the scans was 3 s. The 2D sequence for the diffusion measurement used a double stimulated echo with three spoil gradients for convection compensation and with an eddy current delay of 5 ms for its reduction 31,32 (acronym of the Bruker pulse program: dstebpgp3s). The spin-spin relaxation times T_2 were obtained via the CPMG method (Carr-Purcell-Meiboom-Gill) 33,34 using eight different increments. In the experiment, the time between the inversion (180° pulse) and the read pulse (90° pulse) is incremented in eight steps, and the eight integrals (from the eight increments) are fitted exponentially for the quantitative analysis.

2.2 Size exclusion chromatography and light scattering
2.2.1 Size exclusion chromatography. For relative molecular weight determinations, a PSS SECcurity Agilent 1260 Infinity setup (Polymer Standards Service GmbH (PSS)) was used, including a column set from PSS (2x GRAM 1000, 1x GRAM 100, particle size 10 µm) maintained at 60 °C, a UV (270 nm) and an RI detector. The eluent was DMF (containing 1 g L^-1 LiBr) with a flow rate of 1 mL min^-1. The RI detector signal was used, and the molecular weights are relative to linear polystyrene (PS) standards provided by Polymer Standards Service (PSS). Relative M_w values agreed within 5% (high M_w range) to 10% (low M_w range) with absolute M_w values obtained by static light scattering (SLS).
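A minimal numerical sketch of the mono-exponential (Stejskal-Tanner) diffusion fit from Section 2.1, using the acquisition parameters quoted above (δ = 1.6 ms, Δ = 60 ms, 16 linear gradient steps, 5.350 G mm^-1 ≈ 0.535 T m^-1); the intensities and the diffusion coefficient are synthetic, not measured data.

```python
import numpy as np

gamma = 2.675e8                            # 1H gyromagnetic ratio, rad s^-1 T^-1
delta = 1.6e-3                             # gradient pulse length, s
Delta = 60e-3                              # diffusion delay, s
G = np.linspace(0.02, 1.0, 16) * 0.535     # gradient strengths, T m^-1

D_true = 2.3e-9                            # m^2 s^-1, synthetic "true" value
b = (gamma * G * delta) ** 2 * (Delta - delta / 3.0)
I = np.exp(-b * D_true)                    # ideal echo decay, I(G)/I(0)

# Linear least squares on ln(I) vs b recovers D as minus the slope
D_fit = -np.polyfit(b, np.log(I), 1)[0]
print(D_fit)                               # ~2.3e-9 m^2 s^-1
```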
2.2.2 Light scattering measurements. All light scattering experiments were performed on a commercially available instrument from ALV GmbH consisting of an electronically controlled goniometer and an ALV-5004 multiple-tau full-digital correlator (320 channels). A HeNe laser with a wavelength of 632.8 nm and an output power of 25 mW (JDS Uniphase, Type 1145P) was utilized as the light source. All solutions were filtered through Millex-LCR 0.45 µm filters (Merck Millipore) directly into quartz light scattering cuvettes (inner diameter 18 mm), which were cleaned beforehand in a Thurmond apparatus with distilled acetone. All light scattering measurements were carried out, similar to the NMR measurements, at a temperature of 25 °C. For the dynamic light scattering (DLS) experiments, PNIPAm samples were dissolved at a concentration of 1 g L^-1 in pure water and in pure methanol. The z-average diffusion coefficients were determined after angle-dependent measurements by averaging the apparent diffusion coefficients. The refractive index increment of each sample in methanol was determined with a Michelson interferometer.

2.3 PNIPAm samples and synthesis
Three different PNIPAm samples were used for our experiments. These include a commercial PNIPAm from Sigma-Aldrich [cat # 535311-10G]. Using light scattering and size exclusion chromatography, we estimate the absolute molecular weight M_w and the polydispersity index (PDI) to be M_w(SLS) = 293 kDa, M_w(GPC) = 309 kDa, and PDI = 2.69. The N_l value is estimated using the equation N_l = (M_w/PDI)/M_0, where M_0 is the NIPAm formula weight. This corresponds to N_l = 962 (sample PNIPAm-962). Furthermore, we have prepared two PNIPAm samples by RAFT polymerization, giving chain lengths bracketing that corresponding to N_l ≈ 100 l_p, to better correlate the all-atom simulations and experimental observations (details will be described in the next section).

2.4 All-atom simulations
We employ all-atom molecular dynamics simulations using the GROMACS package. 35 We use the Gromos96 force field 36 for methanol, the SPC/E water model, 37 and the force field parameters for PNIPAm are taken from ref. 4. The temperature is set to 298 K using velocity rescaling with a coupling constant of 0.5 ps. 38 The electrostatics are treated using particle-mesh Ewald. 39 The interaction cutoff is chosen as 1.0 nm. The time step for the simulations is set to 2 fs. Initial equilibration of every configuration is performed at ambient pressure for 20 ns, where the pressure coupling is done using a Berendsen barostat 40 with a coupling time of 0.5 ps. The final configuration of the constant-pressure simulations, with equilibrated density, is used for the canonical simulations.

We choose a PNIPAm chain of length N_l = 256, which corresponds to ≈100 l_p, with l_p being the persistence length of a PNIPAm chain. Note that an atactic PNIPAm chain has l_p ≈ 2-3 monomers. We choose five different methanol mole fractions x_m in the MD simulations. In Table 1 we present our system parameters. The production runs are performed for at least a 1 µs long MD trajectory each. During the production runs, observables such as the gyration radius R_g and the structure factor S(q) are calculated (a sketch of the R_g calculation is given below). In Fig. 1 we present the time evolution of R_g for three different x_m. It can be seen that the structure is rather stable over long simulation time scales.
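A minimal sketch of the gyration-radius observable just mentioned: R_g from (optionally mass-weighted) monomer coordinates of one trajectory frame. The coordinates below are random placeholders, not a real frame.

```python
import numpy as np

def gyration_radius(pos, mass=None):
    """pos: (N, 3) coordinates; mass: optional (N,) weights."""
    if mass is None:
        mass = np.ones(len(pos))
    com = np.average(pos, axis=0, weights=mass)        # center of mass
    sq = np.sum((pos - com) ** 2, axis=1)              # squared distances
    return np.sqrt(np.average(sq, weights=mass))

frame = np.random.default_rng(0).normal(size=(256, 3))  # placeholder "chain"
print(gyration_radius(frame))
```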
PNIPAm conformation in aqueous methanol revisited
We first revisit the PNIPAm conformation in water and methanol mixtures. In Fig. 2 we show R_g as a function of x_m. We have also included the data from the earlier generic simulations corresponding to N_l = 100 l_p. 8 It can be appreciated that, by matching l_p between the all-atom and the previous generic simulations, we obtain a very good (almost quantitative) agreement. Furthermore, Fig. 2 also suggests that 1σ ≈ 1 nm. This can be rationalized as follows: the experiments measure the hydrodynamic radius R_h, which is then translated into R_g using the expressions R_g = 1.5 R_h for coil conformations and R_g = (3/5)^(1/2) R_h for globules. 43,44 Note that we present experimental R_g data only for the well-defined collapsed structures and expanded chain conformations. Near the transition region 0.25 < x_m < 1.0, we do not present data because of the nontrivial relation between R_h and R_g.

Having shown the results for the collapse-swelling-collapse transition, we now briefly explain the origin of this reentrant transition. In this context, it was previously shown that the initial collapse at lower x_m values is due to the preferential binding of methanol molecules with the PNIPAm chain. 7 Therefore, when a small amount of methanol is added, these molecules try to bind to more than one monomer, acting as sticky contacts between distal monomers along the polymer backbone. This tendency leads to the formation of segmental loops along the backbone, initiating the process of polymer collapse. When x_m exceeds a certain concentration, such that the system can overcome the solvent translational entropy, the polymer re-opens after complete decoration of the polymer with methanol molecules. In this case, the conformational entropy contributes a logarithmic correction 19 and therefore has only a weak effect on the overall phenomenon, which is otherwise dominated by enthalpy. The methanol molecules forming enthalpically driven sticky contacts between distal monomer units were termed bridging cosolvents, and their fraction is defined as f_B. 8 The analytical expression for f_B 19 can be converted into a gyration radius using the formulation 19 based on refs. 43 and 44 (eqn (2)), where R_g^0 is the gyration radius for x_m = 0 and V is the magnitude of the negative excluded volume -|V|, which can be estimated from the simulations and the analytical theory. 8 Note that f_B gives a direct measure of V via the relation V = 100 f_B(x_c). Furthermore, an analytical expression can be derived for f_B using a Langmuir-like adsorption isotherm, taking into account the competitive displacement of both solvent and cosolvent. 8 In Fig. 2 we also include R_g estimated using eqn (2). It can be seen that the experimental, all-atom simulation and generic simulation data can all be well described by the analytical theory, suggesting a reasonable correlation between the phenomenon of co-non-solvency and the bridging scenario proposed earlier. 8

Fig. 3 shows a scaling plot of R_g obtained from the all-atom simulations and from the NMR and DLS measurements of the three polymer samples included in this work. For comparison we have also included data taken from the published literature. 3,7
It can be appreciated that the data from the different N_l obtained with the different methods fall onto the universal scaling laws, further showing a nice quantitative agreement between the different methods.

Good solvent collapse
One of the most intriguing features of this phenomenon is that PNIPAm collapses in good solvent. This makes the polymer conformation completely decoupled from the thermodynamic solvent quality (see the chemical potential data in ref. 7). Therefore, we expect the chains to maintain self-avoiding walk statistics between the bridging points, i.e. an exponent ν = 3/5. A quantity that best describes the polymer conformation is the static structure factor S(q), which usually requires very long simulation trajectories and also long chain lengths. In this context, all-atom simulations have thus far mostly been limited to oligomer systems (consisting of N_l = 20-40, i.e. 8-16 l_p). In this work, we calculate S(q) for a collapsed chain at x_m = 0.1 over the last 0.5 µs of the all-atom MD run for N_l = 256 ≈ 100 l_p. In Fig. 4 we show S(q), which exhibits a crossover from an approximate q^(-5/3) scaling between 4 nm^-1 ≤ q ≤ 10 nm^-1 (or a corresponding length scale of 1.6 nm ≥ l ≥ 0.62 nm) to a q^(-4) scaling for q ≤ 4 nm^-1. This suggests that, while a PNIPAm chain remains globally collapsed, it consists of rather large good-solvent blobs with typical sizes of l ≈ 1.3 nm.

Folded PNIPAm structure and side chain organization
We have used solution NMR to monitor the side chain dynamics of PNIPAm as a function of x_m (predeuterated MeOD in D2O). Typical 1H NMR spectra are presented in Fig. 5. All the signals decrease in area and broaden with increasing x_m between 7.5% ≤ x_m ≤ 37.6%, and then increase again when x_m ≥ 37.6%. The decrease in area and broadening of the signals indicate slowing dynamics, which is reflected in the spin-spin relaxation times T_2, as also reported by others as a means to map out polymer collapse. 48 The T_2 values for the H_a signal, plotted in Fig. 6 as a function of x_m, give the expected clear signature of the coil-globule-coil scenario as a result of the side chains becoming less mobile when the polymer is in the collapsed state.

The data from the NMR experiments also suggest that the side group rigidity of a PNIPAm chain arises because PNIPAm collapses in such a way that the inner core of the folded structure is occupied by the side groups. MD simulations also support this claim for a collapsed structure. In this context, as described earlier, when there is preferential binding of PNIPAm with methanol, one expects methanol encapsulation within the collapsed PNIPAm structure. Therefore, it is important to monitor whether the side-chain-rich inner core also has methanol molecules sitting in between. For this purpose, we have performed a separate set of NMR experiments to identify the preferential binding of the methanol molecules with the PNIPAm chains.

Fig. 4: Static structure factor S(q) of a collapsed PNIPAm structure at x_m = 0.1. For the calculation of S(q) we only consider the alkane backbone of the PNIPAm chain.
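A sketch of the single-chain static structure factor used for Fig. 4, via the isotropically averaged Debye formula S(q) = (1/N) Σ_ij sin(q r_ij)/(q r_ij) over the backbone atoms; the positions below are placeholders, not a trajectory frame.

```python
import numpy as np

def structure_factor(pos, q_values):
    """pos: (N, 3) backbone coordinates (nm); q_values: wavenumbers (nm^-1)."""
    rij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    s = []
    for q in q_values:
        qr = q * rij
        # sin(x)/x with the i == j (qr = 0) terms set to 1
        kernel = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
        s.append(kernel.sum() / len(pos))
    return np.array(s)

backbone = np.random.default_rng(0).normal(size=(256, 3))  # placeholder
q = np.logspace(-0.5, 1.2, 30)    # nm^-1, roughly the range shown in Fig. 4
print(structure_factor(backbone, q)[:5])
```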
Preferential adsorption of methanol on PNIPAm and solvent intake
To experimentally observe the preferential interaction of PNIPAm with methanol, we have used a concept whereby preferential PNIPAm-methanol binding 7 can lead to a local aggregation of methanol near the polymer structure and hence its depletion away from the polymer; see the schematic in Fig. 7(a). For this purpose we have prepared an NMR tube separated into two compartments by a membrane filter, see Fig. 7(b). Both compartments contain 15% aqueous methanol, and 1.188 g PNIPAm was added to the upper compartment. This essentially becomes an osmosis experiment, with quantitative 1H-NMR spectra collected to monitor changes in the solvent composition in the lower compartment, from which the solvent uptake by the PNIPAm can be estimated. For all the time-dependent proton spectra, the same phase correction, baseline and integration parameters (integral widths for the proton and the methyl group of methanol) were applied.

In Fig. 7(c) we present the amount of methanol depleted from the lower compartment of the NMR tube in Fig. 7(b), i.e. the amount engulfed by the PNIPAm sample in the upper compartment. The measurements were conducted over sixteen days. It can be seen that the methanol content in the polymer system increases by ≈3% within the first 3-4 days. Beyond 4 days, the solvent intake data show a plateau, indicating no evaporation of methanol from the airtight experimental setup. Note that a rather large polymer concentration is needed in the upper compartment of Fig. 7(b) to observe any significant solvent intake. The solvent intake observed in our NMR experiments further supports our earlier claim that the preferential adsorption of methanol on PNIPAm drives the polymer collapse. 7 This scenario can be further validated by looking into the potential of mean force (PMF) between PNIPAm-methanol and PNIPAm-water pairs, calculated using umbrella sampling. 23,45,46

In Fig. 8 we show the PMF between the different PNIPAm-(co)solvent pairs, which shows a clear signature of preferential enthalpic interactions between PNIPAm and methanol. Furthermore, for a given polymer, the energy density within the solvation volume is dictated not only by the interaction energy, but also by the sizes of the solvent molecules (methanol and water in this case). Therefore, it should still be mentioned that the enthalpic interactions (or bridging) are usually not provided by a single methanol molecule; rather, a few collectively lead to sticky contacts. 8

The window of PNIPAm collapse, or the LCST behavior, is strongly dependent on the temperature of the system. 1,2 Here, it is important to mention that just because a polymer exhibits LCST behavior, it does not follow that the polymer collapse at constant T should also be driven by an enhanced solvent entropy upon the addition of methanol to water. Our arguments are based on the claim that the interaction asymmetry between PNIPAm-methanol and PNIPAm-water dictates the PNIPAm collapse in aqueous methanol mixtures. 8 The smaller the asymmetry, the narrower the window of polymer collapse. 19 To elucidate that PNIPAm collapses because of this interaction asymmetry, we have calculated the PMF between NIPAm-water and NIPAm-methanol for T = 278 K. The data are shown in Fig. 9.
As expected, at reduced T the asymmetry in the interactions is also reduced, making the background binary fluid homogeneous for the polymer. This cannot facilitate a polymer collapse in a binary mixture. Interestingly, previous experiments 1,2 have shown that PNIPAm remains in the coil state at T = 278 K, thus showing a nice correlation, capturing the temperature effects, between the earlier experiments 1,2 and our simulations.

Mechanism of polymer collapse
Lastly, we want to comment on the possible mechanism of cosolvent bridging leading to PNIPAm collapse in aqueous methanol mixtures. In this context, it has already been presented in the previous section, consistent with previous work, 7 that there is preferential binding of PNIPAm with methanol. To further illustrate the bridging scenario driven by preferential adsorption, we calculate the number of hydrogen bonds n_H-bond/N_l between the methanol molecules and the NIPAm monomers using the all-atom simulations. In Fig. 10(a) we show n_H-bond/N_l as a function of x_m. In the range 0.10 ≤ x_m ≤ 0.25, NIPAm shows a strong tendency of hydrogen bonding with the methanol molecules, evident from the maximum deviation from the expected linear behavior. 4 Without attempting to describe any specific geometry or arrangement of H-bond donors and acceptors, we can assume the -OH end of the methanol points towards the PNIPAm amide linkage, such that the CH3 of methanol now forms part of the local solvent-accessible surface previously defined by the amide group. The sticky contacts could then be formed between these CH3 groups and the isopropyl group of a distal NIPAm unit (see Fig. 10(b)), or between multiple CH3 groups of bound methanol molecules (see Fig. 10(c)). Note that in Fig. 10(b) and (c), for simplicity of representation, we only highlight hydrogen bonds between the hydrogen of methanol and the oxygen of the NIPAm amide groups, which we expect to be the most dominant. However, it should be mentioned that there might be several more scenarios. For example, there is the possibility of bonding between the methanol oxygen and the hydrogen of the amide group, and between the methanol hydrogen and the nitrogen of the amide group. Furthermore, it should also be mentioned that a single CH3 interaction with an isopropyl group is ≈k_B T, which may not sound like a large enough interaction strength. However, these methanol-mediated sticky contacts are each likely facilitated by a few methanol molecules, making the sticky-contact attraction strength of the order of several k_B T, rather than being provided by a single methanol molecule, as simplified in the schematics shown in Fig. 10(b) and (c). Here we want to emphasize that the solvation properties are intimately linked to the energy density within the solvation volume; they are hence not only dictated by the individual interaction strengths, but also related to the number of (co)solvent particles within the solvation volume. Therefore, if one reduces the energy density within the solvation volume, such that the solvent-cosolvent interaction contrast is also reduced, one should expect to see a narrower window of collapse. Indeed, it has been experimentally observed that for larger alcohols (such as ethanol or propanol) the window of collapse is reduced by ≈30-40% in comparison to methanol. 47
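A hedged sketch of the hydrogen-bond count n_H-bond/N_l between methanol and the NIPAm amide groups for one trajectory frame. The paper does not state its exact criterion; the snippet uses a common GROMACS-style geometric criterion (donor-acceptor distance < 0.35 nm, hydrogen-donor-acceptor angle < 30°), and all coordinates are random placeholders.

```python
import numpy as np

def count_hbonds(d_pos, h_pos, a_pos, r_cut=0.35, ang_cut_deg=30.0):
    """d_pos, h_pos: (Nd, 3) donor and bonded-H coordinates (nm);
    a_pos: (Na, 3) acceptor coordinates (e.g. amide oxygens)."""
    n = 0
    for d, h in zip(d_pos, h_pos):
        da = a_pos - d
        dist = np.linalg.norm(da, axis=1)
        close = da[dist < r_cut] / dist[dist < r_cut][:, None]
        if len(close) == 0:
            continue
        dh = (h - d) / np.linalg.norm(h - d)
        ang = np.degrees(np.arccos(np.clip(close @ dh, -1.0, 1.0)))
        n += int(np.sum(ang < ang_cut_deg))
    return n

rng = np.random.default_rng(1)
donors = rng.uniform(0.0, 3.0, (50, 3))                 # methanol O positions
hydrogens = donors + rng.normal(0.0, 0.04, (50, 3))     # bonded H positions
acceptors = rng.uniform(0.0, 3.0, (256, 3))             # amide O positions
print(count_hbonds(donors, hydrogens, acceptors))
```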
Conclusions
We have revisited the co-non-solvency of PNIPAm in aqueous methanol mixtures. For this purpose, we have combined nuclear magnetic resonance and dynamic light scattering experiments with all-atom molecular dynamics simulations, complementing our earlier studies using generic simulations 8 and analytical theory. 19 These findings strongly support that the initial collapse at lower methanol concentration is due to the methanol-PNIPAm enthalpic (bridging) effects and that the reopening at larger methanol concentrations is entropic. Furthermore, preferential PNIPAm-methanol binding leads to an increased organization of the PNIPAm side chains, which is intimately linked to the global conformational behavior of PNIPAm in aqueous methanol mixtures.

While we study a specific system of PNIPAm in aqueous methanol mixtures, our proposed mechanism of PNIPAm collapse at lower methanol concentrations provides a natural explanation for other phenomena, such as the initial collapse of PNIPAm in aqueous urea mixtures, 48,49 where the collapse was proposed to be initiated by hydrogen-bonded bridging of urea molecules between two NIPAm monomers that are far apart along the polymer backbone.

It should also be mentioned that, when dealing with polymer physics and/or the thermodynamics of polymer solutions, two key considerations, out of several, are absolutely needed to make any reasonable comparison to experimental data: (1) a polymer should be studied, and not an oligomer, and (2) the time scale of the simulations should be compared to the polymer relaxation time. In this context, our simulations of a µs-long trajectory of an N_l = 100 l_p chain, consisting of two million particles, are the largest all-atom simulations performed on PNIPAm-based systems. The reasonably good agreement between the all-atom simulations and the experiments, complementing the earlier generic simulations, suggests that the co-non-solvency phenomenon is indeed driven by enthalpy. 8

Fig. 1: Time evolution of the polymer gyration radius R_g for three different methanol mole fractions x_m. The results are shown for a chain length N_l = 256 and at temperature T = 298 K.

Fig. 2: Gyration radius R_g as a function of methanol mole fraction x_m. Results are shown for R_g obtained from all-atom simulations for a chain of length N_l = 256 ≈ 100 l_p and experimental measurements for N_l ≈ 207 ≈ 83 l_p (sample PNIPAm-207). For pure water (x_m = 0.0) and pure methanol (x_m = 1.0) we use the data obtained from dynamic light scattering (DLS). For the collapsed structure at x_m = 0.1 and 0.25, we present the data calculated using nuclear magnetic resonance (NMR). For comparison, we also include data from our previous generic simulations 8 with N_l ≈ 100 l_p and the analytical expression presented in eqn (2). Arrows indicate the corresponding y-axis of the corresponding data set.

Fig. 3: Gyration radius R_g as a function of chain length N_l. Results are shown for coil (at x_m = 1.0) and globule (at x_m = 0.1) conformations. Data for R_g are obtained from different experiments and simulations, and also from published work in the literature, as described in the legend. Symbols are the respective data and the lines are the power-law fits shown in the legend. For coil conformations R_g ∝ N_l^(3/5) and for globular conformations R_g ∝ N_l^(1/3).
Fig. 5: Left panel shows a schematic representation of a monomer of PNIPAm. In the NMR experiments we identify the rigidity of H_a and H_b indicated in this schematic. Right panel presents nuclear magnetic resonance (NMR) spectra highlighting the H_a and H_b hydrogens as indicated in the left panel. Results are shown for different methanol mole fractions x_m, starting from pure water x_m = 0.0 (or 0%) to pure methanol x_m = 1.0 (or 100%). The signal around 3.75 ppm corresponds to H_a and the H_b peak appears around 1.10 ppm. Note that for a clear representation of the data, we have aligned the peak positions of H_a. We have used the Sigma-Aldrich sample PNIPAm-962.

Fig. 6: In-plane relaxation time T_2 of the H_a hydrogen as a function of methanol mole fraction x_m. The data are obtained by integrating the intensity peak around 3.75 ppm in Fig. 5.

Fig. 7: Part (a) presents a schematic representation of the concept whereby local aggregation of cosolvents near a polymer leads to their depletion away from the macromolecular structure. Part (b) shows a schematic of the 5 mm tube with an external reference capillary, a seal and a paper filter in the center that separates the PNIPAm from the bulk aqueous methanol mixture at the bottom. Part (c) presents the amount of excess methanol molecules encapsulated by the collapsed PNIPAm sample, as measured from the depletion of methanol in the lower part of the NMR tube. Here the 3% of methanol is with respect to the initial 15% methanol content.

Fig. 8: Potential of mean force v(r) showing the NIPAm-methanol and NIPAm-water interaction strengths for two different pressures. Simulations are performed at a temperature of 298 K and the data are taken from ref. 23.

Fig. 10: Part (a) shows the number of hydrogen bonds n_H-bond/N_l between a methanol molecule and a monomer of PNIPAm as a function of methanol mole fraction x_m. The dashed red line is the linear extrapolation. Data are shown for all-atom simulations of chain length N_l = 256 and for a temperature T = 298 K. Parts (b) and (c) present schematics of two possible scenarios of bridging methanol molecules, which we expect to be the most relevant.

Table 1: System sizes for the simulations performed in this study. Here, N is the total number of solvent molecules, N_w is the number of water molecules, N_m is the number of methanol molecules, x_m is the methanol mole fraction, L_box the equilibrated box length, and R_g the chain gyration radius.
2018-04-03T04:03:45.946Z
2016-09-28T00:00:00.000
{ "year": 2016, "sha1": "b5c11aee1f1d7f9bac64dfa9f703d077dd8acd9b", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/sm/c6sm01789d", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "dd49e9ae7a079d24c17cfef742f60d01395c6f16", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
248227771
pes2o/s2orc
v3-fos-license
Multi-messenger observations of binary neutron star mergers in the O4 run

We present realistic expectations for the number and properties of neutron star binary mergers to be detected as multi-messenger sources during the upcoming fourth observing run (O4) of the LIGO-Virgo-KAGRA gravitational wave (GW) detectors, with the aim of providing guidance for the optimization of observing strategies. Our predictions are based on a population synthesis model which includes the GW signal-to-noise ratio, the kilonova (KN) optical and near-infrared light curves, the relativistic jet gamma-ray burst (GRB) prompt emission peak photon flux, and the afterglow light curves in radio, optical and X-rays. Within our assumptions, the rate of GW events to be confidently detected during O4 is $7.7^{+11.9}_{-5.7}$ yr$^{-1}$, 78% of which will produce a KN, and a lower 52% will also produce a relativistic jet. The typical depth of current optical electromagnetic search and follow-up strategies is still sufficient to detect most of the KNe in O4, but only for the first night or two. The prospects for detecting relativistic jet emission are not promising. While closer events (within z<0.02) will likely still have a detectable cocoon shock breakout, most events will have their GRB emission (both prompt and afterglow) missed unless seen under a favorably small viewing angle. This reduces the fraction of events with detectable jets to 2% (prompt emission, serendipitous) and 10% (afterglow, deep radio monitoring), corresponding to detection rates of $0.17^{+0.26}_{-0.13}$ and $0.78^{+1.21}_{-0.58}$ yr$^{-1}$, respectively. When considering a GW sub-threshold search triggered by a GRB detection, our predicted rate of GW+GRB prompt emission detections increases up to a more promising $0.75^{+1.16}_{-0.55}$ yr$^{-1}$.

1. INTRODUCTION
The second generation of gravitational wave detectors, now comprising the Advanced Laser Interferometer Gravitational wave Observatory (aLIGO, Aasi et al. 2015), Advanced Virgo (Acernese et al. 2015) and, starting with the third observing run O3, KAGRA (Aso et al. 2013), led to a revolution in our capability to listen to the Universe, which started with the discovery of GW150914 (Abbott et al. 2016), the first compact binary coalescence (CBC) detected in gravitational waves (GWs). During the first three observing runs (O1, O2 and O3 - Abbott et al. 2019a, 2021a), the network, operated by the LIGO, Virgo and KAGRA (LVK) Collaborations, detected a total of ninety events, comprising signals from merging binary black holes (BHBH, the vast majority), binary neutron stars (NSNS, with only two confident identifications) and, remarkably, 2 black hole-neutron star (BHNS) coalescences (Abbott et al. 2021c,b). The latter two detections, performed during the second part of O3, marked the first ever observation of this new type of source. So far, electromagnetic (EM) emission has been observed only in association with the NSNS merger GW170817 (Abbott et al. 2017c). Thanks to Advanced Virgo joining the network shortly before, GW170817 was localized in the sky within an area of 28 deg^2 (at 90% credible level). Remarkably, the localisation was consistent with that of GRB170817A, a short gamma-ray burst (GRB) detected by Fermi and INTEGRAL (Abbott et al. 2017a) two seconds after the GW170817 chirp. Telescopes all over the world soon discovered an intrinsically faint, rapidly evolving optical/near-infrared transient in a nearby galaxy within the GW170817 localisation error box (Coulter et al. 2017),
which was then spectroscopically classified (Pian et al. 2017) as a kilonova (KN), that is, quasi-thermal emission from the expanding ejecta produced during and after the merger, powered by the nuclear decay of heavy elements synthesized by rapid neutron capture. In the second week after the merger an additional, broadband (radio to X-rays), non-thermal source was detected at the same position: after a few months, the debate about the nature of the source was settled by very long baseline interferometry observations (Mooley et al. 2018; Ghirlanda et al. 2019), which provided conclusive evidence in support of its interpretation as the afterglow of a relativistic jet seen off-axis.

The O3 observing run did not see any new EM counterpart detection, despite the significant increase in sensitivity. The EM follow-up campaigns in response to potentially EM-bright O3 events proved generally difficult, in some cases due to the poor sky localisation of the GW signal (e.g. in the case of GW190425, Abbott et al. 2020a) or to the relatively large distance (e.g. GW190814, for which the non-detection of an EM counterpart did not lead to strong constraints on the progenitor - see for example Ackley et al. 2020 - despite the good localisation and the massive observational effort).

The next, year-long observing run O4 is currently planned 1 to start in December 2022. The improvements in the sensitivity of the LIGO Hanford and Livingston, Virgo and KAGRA (HLVK) interferometers will let us explore a wider volume of the Universe, with a large predicted increase in the detection rate with respect to O3 (Abbott et al. 2020b; Petrov et al. 2021). The optimisation of EM follow-up strategies will be fundamental in order to enhance the probability of discovering rapidly fading transients in association with these detections. Indications about the predicted GW and EM properties of the population accessible during O4 would be extremely valuable to this task (see for an application using the expected kilonova light curve range). In this Letter we present our predictions 2 for the observational appearance of the EM emission associated with NSNS mergers that will be detected during O4, focusing on KN and jet-related emission. To this purpose, we built a synthetic population of merging NSNS binaries, with a mass distribution informed by both GW-detected and Galactic NSNS binaries (see Appendix A.1), and computed the expected properties of their ejecta and accretion disks through numerical-relativity-informed fitting formulae. Using these properties as inputs, we then computed the observable properties of their associated KN, GRB prompt and GRB afterglow emission through a suite of semi-analytical models, updating the methodology described in Barbieri et al. 2019b. This allowed us to construct the distributions of the EM observables for O4 GW-detectable events, and to address a number of fundamental questions, such as: do all detectable NSNS mergers produce an EM counterpart? Which counterpart is best detected in wide-area surveys or in targeted observations? How diverse is the kilonova emission in terms of brightness and other properties? How long after the merger do we expect the detection of most of the GRB afterglows in the radio, optical and X-ray bands?

2. PROSPECTS FOR EM COUNTERPART SEARCH AND MONITORING IN O4
2.1. Multi-messenger observing scenarios and detection limits

We consider two representative sets of detection limits (see Table 1) based on the typical depth that can be reached during an EM follow-up in response to a GW alert. In particular, the 'counterpart search' set is representative of the search for an EM counterpart over the GW localization volume (or of online triggering algorithms in the case of space-based gamma-ray detectors), while the 'candidate monitoring' set consists of deeper limits typical of the monitoring of a candidate counterpart with arc-second localisation (or of off-line sub-threshold searches in gamma-ray detector data). In addition to discussing the expected rates of GW+EM events that exceed (some combinations of) these limits, we also briefly discuss the prospects for joint GW+EM detections in off-line sub-threshold searches in GW data triggered, for example, by a GRB detection by an EM facility (we call this a 'sub-threshold GW search', see e.g. Abbott et al. 2017b).

Given the typical expected GW localization areas and distances in O4 (Abbott et al. 2020b; Petrov et al. 2021), optical/infrared counterpart searches covering a significant fraction of the localization probability will only be feasible with large, wide-field telescopes or in a galaxy-targeted approach with smaller facilities. In both cases, the typical realistic depth of EM counterpart search observations is down to 21-22 AB magnitudes in the J, z, g bands (e.g. Coughlin et al. 2019; Ackley et al. 2020), in part limited by the availability of deep templates. Radio telescopes with a sufficiently fast survey speed can conduct searches for an EM counterpart over a significant fraction of the GW error box (Dobie et al. 2022), either by means of an unbiased survey of the area or by preferentially targeting galaxies, realistically reaching detection limits of 0.1 mJy at a representative frequency of 1.4 GHz (e.g. Dobie et al. 2021; Alexander et al. 2021); X-ray searches have been attempted with the Neil Gehrels Swift Observatory (Page et al. 2020) and typically reached a 10 −13 erg cm −2 s −1 keV −1 limiting flux at 1 keV. Despite not technically representing a search, we include in this category the gamma-ray sky monitoring by Fermi/GBM and Swift/BAT, with representative 64-ms peak photon flux limits of 4 and 3.5 ph cm −2 s −1 , respectively (these limits are based on a visual comparison of the flux distribution predicted by our model with those observed by these instruments, see Figure 11 in Appendix B.3.1).

Once a promising candidate is localized with arc-second accuracy, longer exposures become feasible, and deeper limits can be reached: our deeper 'candidate monitoring' detection limit set assumes a detection to be possible down to 28 AB magnitudes in the J, z, g bands, representative of deep space-based observations or of ground-based ones with large adaptive-optics-equipped facilities (e.g. Lyman et al. 2018); in X-rays down to 10 −15 erg cm −2 s −1 keV −1 at 1 keV, representative of the limits that can be reached by Chandra or XMM-Newton with long (∼ 10 4 s) exposures (e.g. Margutti et al. 2017; D'Avanzo et al. 2018); in radio down to 10 µJy, representative of limits that can be reached after hour-long exposures with a large facility such as the Karl G. Jansky Very Large Array (e.g. Hallinan et al. 2017). We also include in this category the off-line, sub-threshold detection of gamma-ray emission by Fermi/GBM and Swift/BAT, with a representative flux limit of 1 ph cm −2 s −1 for both.
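To make the use of these limit sets concrete, the sketch below encodes them as a small lookup structure and checks a hypothetical candidate against them. The numerical values are those quoted above (with 21.5 mag standing in for the 21-22 range); the function and variable names are our own illustrative choices, not taken from any pipeline used in this work.

```python
# Minimal sketch (Python): the two detection limit sets described above,
# plus an AB-magnitude detectability check for an optical/IR candidate.
AB_ZEROPOINT_JY = 3631.0  # flux density corresponding to m_AB = 0

LIMITS = {
    "counterpart_search": {"optical_AB": 21.5, "radio_mJy": 0.1,
                           "xray_cgs": 1e-13, "gamma_phcm2s": 4.0},
    "candidate_monitoring": {"optical_AB": 28.0, "radio_mJy": 0.01,
                             "xray_cgs": 1e-15, "gamma_phcm2s": 1.0},
}

def ab_mag_to_mjy(m_ab: float) -> float:
    """Convert an AB magnitude to a flux density in mJy."""
    return AB_ZEROPOINT_JY * 10 ** (-0.4 * m_ab) * 1e3

def optical_detectable(m_ab: float, scenario: str) -> bool:
    """Detectable if brighter (numerically smaller magnitude) than the
    limiting magnitude of the chosen scenario."""
    return m_ab <= LIMITS[scenario]["optical_AB"]

# A kilonova peaking at g = 21.0 is caught by the wide-area search, while
# one at g = 23.5 requires the deeper 'candidate monitoring' limits:
for m in (21.0, 23.5):
    print(m, optical_detectable(m, "counterpart_search"),
          optical_detectable(m, "candidate_monitoring"))
```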
Based on the above considerations, we defined the set of representative detection limits given in Table 1. For the GW detection, we assumed a network signal-to-noise ratio (SNR) threshold SNR net ≥ 12 (see the next sub-section for the definition) for a confident detection, and a less stringent SNR net ≥ 6 for the sub-threshold GW search.

2.2. GW-EM population model

Our synthetic cosmological population of NSNS mergers is characterized by power-law chirp mass and mass ratio probability distributions, assumed independent and fitted to currently available observational constraints from both GW-detected and Galactic NSNS binaries (see Appendix A.1). We assumed a cosmic merger rate density (Appendix A.2) obtained by convolving the cosmic star formation rate from Madau & Dickinson 2014 with a simple P(t d ) ∝ t d −1 delay time distribution (here t d represents the delay between the formation of the binary and its GW-driven merger) with a minimum delay t d,min = 50 Myr, normalized (see Appendix A.3) to a local rate density R 0 = 347 +536 −256 Gpc −3 yr −1 to self-consistently reproduce the actual number of significant NSNS mergers observed so far (Abbott et al. 2021b). For each event we computed the expected SNR in the LIGO, Virgo and KAGRA detectors with the projected O4 sensitivities (we note that eliminating KAGRA from the network would decrease the rates by less than 0.6%, making this assumption of negligible impact), adopting the TaylorF2 approximant from the software package pycbc to model the GW signal (Abbott et al. 2020), and computed the network SNR as SNR net = (∑ i SNR i 2 ) 1/2 (where i runs over the detectors in the network and we assumed an 80% duty cycle for each detector, in practice setting each single-detector SNR i to zero randomly with 20% probability). For all events in the population we then computed the expected ejecta mass, ejecta average velocity and accretion disk mass using numerical-relativity-informed fitting formulae (Krüger & Foucart 2020; Radice et al. 2018) and assuming the SFHo equation of state (Steiner et al. 2013), which satisfies the current astrophysical constraints (e.g. Miller et al. 2019). This equation of state predicts a maximum non-rotating NS mass of M TOV = 2.06 M ⊙ . We used the results as inputs to compute KN light curves from 0.1 to 50 days in the g (484 nm central wavelength), z (900 nm) and J (1250 nm) bands, using the multi-component model of Perego et al. 2017 with updates based on Breschi et al. 2021 (see Appendix B.2 for more details). In cases of mergers with final mass M rem ≥ 1.2 M TOV , corresponding to remnants that collapse promptly, or after a short-lived hyper-massive neutron star phase, to a black hole, we assumed the system to launch a relativistic jet, with an energy set by the mass of the accretion disk and the spin of the remnant (see Appendix B.3). In cases in which the jet energy exceeded a threshold defined following Duffell et al. 2018, we assumed the relativistic jet to be able to break out of the ejecta cloud and produce GRB prompt and afterglow emission (a 'successful jet'). In our population, 52% of the events launch a successful jet, satisfying the current observational constraints on the incidence of jets in NSNS mergers (Salafia et al. 2022). For these cases, we assumed a jet angular structure (i.e. an angular dependence of the jet energy density and bulk Lorentz factor) inspired by GRB170817A (see Appendix B.3 for more details) and computed afterglow light curves from 0.1 to 1000 days in the radio (1.4 GHz), optical (g band; we do not consider dust extinction when computing the optical KN and GRB afterglow emission) and X-rays (1 keV), fixing the interstellar medium density at n = 5 × 10 −3 cm −3 (the median density in the Fong et al.
2015 sample) and the afterglow microphysical parameters at ε e = 0.1, ε B = 10 −3.9 and p = 2.15 (representative of GW170817, Ghirlanda et al. 2019). Given the uncertainty on the detailed physical processes involved in the GRB prompt emission, to compute its properties we adopted a semi-phenomenological model similar to that used in Barbieri et al. 2019b and Salafia et al. 2019, where a constant fraction η γ = 0.15 (Beniamini et al. 2016) of the jet energy density at each angle (restricting to regions with a bulk Lorentz factor Γ ≥ 10) is assumed to be radiated in the form of photons with a fixed spectrum in the comoving frame. The observed spectrum was then obtained by integrating the resulting radiation over the jet solid angle, accounting for relativistic beaming. To account for a putative wider-angle cocoon shock breakout component (Gottlieb et al. 2018), for systems observed within a viewing angle θ v ≤ 60 • we also included an additional emission component whose properties reproduce those observed in GRB170817A (Abbott et al. 2017a), namely a luminosity L SB = 10 47 erg/s and a cut-off power-law spectrum with νF ν peak photon energy E p,SB = 185 keV and low-energy photon index α = −0.62. The photon fluxes in the 10-1000 keV (Fermi/GBM) and 15-150 keV (Swift/BAT) energy bands were then computed assuming a fixed rest-frame duration T = 2 s for all bursts. We provide more details on the model in Appendix B.3.1. To compute the final GRB prompt emission detection rates we took into account the limited field of view and duty cycle of Fermi/GBM and Swift/BAT by multiplying the resulting rates by 0.60 and 0.11, respectively (Burns et al. 2016).

2.3. Detection rates with the 'counterpart search' limit set

In the left-hand panel of Figure 1 we show our predictions for the EM counterpart search scenario in O4, assuming the 'counterpart search' limit set. The light grey line ("All NSNS") represents the intrinsic cumulative merger rate, with the underlying light grey band showing its uncertainty (the Poissonian uncertainty on the rate density normalization assuming our mass distribution, see Appendix A.3), which propagates as a constant relative error contribution to all the other rates shown in the figure. The black line ("HLVK O4") is our prediction for the cumulative detection rate of NSNS mergers by the GW detector network in O4. For comparison we also show, with a dark grey line, the rate assuming the HLV O3 network configuration and duty cycle (Abbott et al. 2021a). The blue, red and orange lines are the all-sky cumulative rates for the joint detection of GW and KNe ("Kilonova+O4"), GW and GRB afterglows ("GRB Afterglow+O4"), and GW and GRB prompt emission ("GRB Prompt+O4"), respectively. For the latter we show the rate for a GRB detection by Fermi/GBM (dashed line) and, for comparison, the rate of a putative detector with the same sensitivity, but with an all-sky field of view and a 100% duty cycle (solid line). The result for Swift/BAT is reported in Table 1. The redshift (or luminosity distance) values at which the curves saturate clearly show that the horizons are currently set by the GW detection.
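The network SNR prescription of Sect. 2.2 (quadrature combination with an 80% per-detector duty cycle) can be sketched in a few lines; the function name and the example single-detector SNR values below are ours, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def network_snr(single_snrs, duty_cycle=0.8, rng=rng):
    """Quadrature sum of single-detector SNRs, with each detector
    independently switched off with probability (1 - duty_cycle),
    as assumed in the population model described above."""
    snrs = np.asarray(single_snrs, dtype=float)
    active = rng.random(snrs.shape) < duty_cycle  # True -> detector on
    return np.sqrt(np.sum((snrs * active) ** 2))

# Hypothetical single-detector SNRs for one event (H, L, V, K):
snr_net = network_snr([9.0, 8.0, 4.0, 1.5])
confident = snr_net >= 12.0      # 'confident detection' threshold
subthreshold = snr_net >= 6.0    # 'sub-threshold GW search' threshold
print(snr_net, confident, subthreshold)
```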
We find that, in the O4 run, NSNS merger GW signals will be detectable out to ∼ 300 Mpc (z ∼ 0.07), with a detection rate of 7.7 +11.9 −5.7 events per calendar year. Joint GW+EM detection rates for the various counterparts considered are reported in Table 1. These rates show that the vast majority of KNe associated with O4 events will be brighter than the assumed limits at peak, and therefore in principle within the reach of current EM counterpart search facilities and strategies. As shown in Figure 2 and detailed in Sect. 3.1, though, the extremely fast evolution of these sources will make their actual identification very challenging, and will require a coordinated global effort and the use of large facilities. Our predicted joint GW and GRB rates for EM searches are instead much lower (0.32 +0.51 −0.23 yr −1 for the GRB afterglow and 0.17 +0.26 −0.13 yr −1 for the GRB prompt), and they reflect the faintness of these components for the considered flux thresholds, mainly because of the large abundance (97%) of off-axis jets (i.e. with θ v ≥ 2θ c , where θ c is the core angle as defined in Appendix B.3).

2.4. Detection rates with the 'candidate monitoring' limit set

In the right-hand panel of Figure 1 we show the results for the scenario simulating the monitoring of a well-localized candidate and the GRB prompt sub-threshold detection, assuming the 'candidate monitoring' limit set. These rates represent the hypothetical maximum detection rates that can be achieved in the limiting situation in which all events are localised to arc-second accuracy, allowing for observations as deep as the assumed limits. The KN rate in this panel is therefore shown mostly for reference, as the most likely scenario is one in which the arc-second localisation is obtained through the identification of the KN in a shallower wide-area search. Still, given that all jet-producing events in our population also produce a KN, and given that almost all our KNe exceed the 'counterpart search' limit set, the rates reported for the afterglow in this panel do represent actually achievable rates. The light grey and black lines in the panel are the same as in the left panel. The blue and red lines are the all-sky cumulative detection rates for the Kilonova+O4 and GRB Afterglow+O4 detectable sources with this limit set. For the latter emission we report individually the rates of events exceeding the radio, optical and X-ray detection thresholds (solid, dashed and dotted lines, respectively), showing radio to be the most promising band for the detection of a faint GRB afterglow counterpart. In this panel we also show with orange lines the rates of joint GW+GRB detections assuming a detection threshold (see Table 1) representative of an off-line sub-threshold search in the gamma-ray detector data. The fact that the deeper optical and infrared limits do not increase the KN detection rate significantly reflects the fact that the majority of KNe associated with O4 events in our population are already brighter (at peak) than the limits adopted in the search scenario.

Table 1. Assumed detection limits and predicted detection rates in our observing scenarios. Below each rate we also report in parentheses the fraction over the total O4 NSNS GW rate ("HLVK O4"). The GW detection limits refer to the SNR net threshold.
Near-infrared and optical limiting magnitudes are in the AB system; radio limiting flux densities are in mJy @ 1.4 GHz; X-ray limiting flux densities are in erg cm −2 s −1 keV −1 @ 1 keV; gamma-ray limiting photon fluxes are in photons cm −2 s −1 in the 15-150 keV (Swift/BAT) or 10-1000 keV (Fermi/GBM) band. Detection rates are in yr −1 . The reported errors, given at the 90% credible level, stem from the uncertainty on the overall merger rate (hence they cancel out in the fractions), while systematic errors are not included.

As far as the GRB afterglow is concerned, we find that the deeper limits allow us to increase the detection rate in the radio, optical and X-ray bands by factors of ∼ 3, 8.5 and 2, respectively, with the highest detection rate in the radio, reaching 0.78 +1.21 −0.58 yr −1 . Also for the GRB prompt sub-threshold detection we find a small increase in the rates, up to 0.31 +0.48 −0.23 yr −1 . All detection rates are reported in Table 1.

2.5. Sub-threshold GW search in response to an external EM trigger

The bottom group of rows in Table 1 reports the detection rates predicted by our model adopting a lower GW detection threshold SNR net ≥ 6, which we take as representative of a sub-threshold GW search for events coincident with an external EM trigger. The most relevant external trigger, in our context, is a GRB, as it allows the search to focus on a short time interval and on a relatively small sky area, therefore increasing significantly the sensitivity with respect to an all-sky, all-time search (Abbott et al. 2017b). Thanks to the expanded GW horizon in the sub-threshold search, the rate of joint GRB+GW detections increases to a more promising 0.75 +1.16 −0.55 yr −1 (for Fermi/GBM), which is in good agreement with the rate predicted by the LVK Collaboration for the same kind of search (Abbott et al. 2021e) and would mean a relatively high chance of a new GRB-NSNS association. Sub-threshold searches may in principle be conducted also in response to the EM detection of a KN or orphan GRB afterglow candidate: for that reason, we also report the joint GW+KN and GW+afterglow rates in the table, but we caution that these are not representative of a real expected rate, as the serendipitous discovery of KNe and orphan GRB afterglows in current all-sky surveys is hampered by limited cadence, depth, and availability of time at large facilities for spectroscopic classification of candidates.

3. EM PROPERTIES

In this section, we characterize the EM properties of the GW-detectable (with SNR net ≥ 12) NSNS mergers in our population. Our purpose is mainly that of informing EM follow-up strategies, by constructing expected distributions of source brightness at various times and frequencies and for different EM counterparts.

3.1. Kilonova

In Figure 2 we show the time evolution of the distribution of KN brightness for binaries in our population that are GW-detectable in O4. In particular, in the left-hand panel we show the bands that contain 50%, 90% and 99% of the light curves at each time. Blue and red colors refer to the g and z band, respectively (we show the corresponding result for the J band in Figure 10 in Appendix B.2). When scaled to the median distance (∼ 176 Mpc) of these events, AT2017gfo (colored circles) lies at the top of the 50% band, showing that our assumptions are conservative in that they predict KNe that are on average slightly dimmer than AT2017gfo, but with a similar temporal evolution.
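Brightness bands like those in Figure 2 can be built from a set of simulated light curves by taking symmetric percentiles at each time step. A minimal sketch follows; since the actual population light curves are not reproduced here, a fabricated toy array stands in for them.

```python
import numpy as np

# Placeholder: mags[i, j] = AB magnitude of simulated kilonova i at time t[j].
# In the study these would come from the population model; here we fabricate
# a toy array purely to illustrate the band construction.
t = np.linspace(0.1, 50.0, 200)                      # days after merger
mags = 21.0 + 2.5 * np.log10(t)[None, :] + np.random.default_rng(0).normal(
    0.0, 0.8, size=(1000, 1))                        # toy light curves

def brightness_bands(mags, fractions=(0.5, 0.9, 0.99)):
    """Symmetric percentile bands containing the given fractions of
    light curves at each time (as in Figure 2)."""
    bands = {}
    for f in fractions:
        lo, hi = 50 * (1 - f), 50 * (1 + f)
        bands[f] = (np.percentile(mags, lo, axis=0),
                    np.percentile(mags, hi, axis=0))
    return bands

bands = brightness_bands(mags)
print({f: (b[0][0].round(2), b[1][0].round(2)) for f, b in bands.items()})
```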
While the peaks of these KNe span a relatively wide apparent magnitude range, 17-24, 50% are concentrated in the relatively narrow interval 20-22. In the right-hand panel we show the cumulative apparent magnitude distributions at peak (solid line) and also 3 and 5 days after the merger (dashed and dotted lines), which clearly display the very rapid evolution, especially in the g band.

Figure 1. Cumulative multi-messenger detection rates as a function of redshift (luminosity distance) for our NSNS population. The left-hand panel assumes the 'counterpart search' detection limits, representative of a search for an EM counterpart over the GW localization volume (see Tab. 1). The light grey line ("All NSNS") represents the intrinsic merger rate in a cumulative form, with the grey band showing its assumed uncertainty (Abbott et al. 2021d), which propagates as a constant relative error contribution to all the other rates shown in the figure. The black ("HLVK O4") and grey ("HLV O3") lines are the cumulative GW detection rates (events per year with network SNR ≥ 12, accounting for the single-detector duty cycles) in O4 and O3. The blue ("Kilonova+O4"), red ("GRB Afterglow+O4") and orange ("GRB Prompt+O4") lines are the cumulative detection rates for the joint detection of GW and a KN, GRB afterglow or GRB prompt in O4 (in at least one of the considered bands, all-sky except for the dashed line, which accounts for the Fermi/GBM duty cycle and field of view). The right-hand panel assumes deeper detection limits (see Tab. 1), representative of the monitoring of a well-localized candidate (and a sub-threshold search for the GRB prompt). For the GRB afterglow we show separately the radio, optical and X-ray band detection rates (solid, dashed and dotted, respectively).

3.2. GRB Afterglow

In Figure 3 we show the properties of GRB afterglows associated with GW-detectable binaries in our population by showing the contours containing 50% (solid lines) and 90% (dashed lines) of GRB afterglow peaks on the (νL ν , t) plane, where L ν = 4π d L 2 F ν /(1 + z) is the specific luminosity, ν is the observer frequency and t is the observer time. The red, green and blue colors refer to our radio, optical and X-ray bands, respectively. Most peak times are of the order of 10 2 days (we note that we restricted the light curve computation to between 10 −1 and 10 3 rest-frame days), with a tail at shorter peak times. We also show 500 randomly sampled optical light curves (thin grey lines) in the background, to help visualize the underlying light curve behavior. For comparison, we also show GRB170817A data (Makhathini et al. 2021, small circles), whose peak lies within the 50% contours in all three bands. These results stem from the strong dependence of the GRB afterglow light curve on the viewing angle, combined with the GW-detection-induced bias on the viewing angle distribution (which skews the distribution towards smaller viewing angles with respect to the isotropic case, with a peak at ∼ 30 • - Schutz 2011). This places the majority of the peaks months to years after the GW event, with a small sub-sample peaking at early times (∼ hours) in the optical and X-rays, producing very bright emission thanks to a smaller viewing angle.
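The viewing-angle bias just mentioned can be made quantitative: for an SNR-limited search, Schutz 2011 finds that the probability density of the viewing angle of detected binaries is proportional to sin θ [(1 + cos 2 θ) 2 /4 + cos 2 θ] 3/2 , which peaks near 30 degrees. The sketch below samples this density by rejection; the implementation details are ours.

```python
import numpy as np

def schutz_density(theta):
    """Unnormalized viewing-angle density of GW-detected binaries
    (Schutz 2011): isotropic sin(theta) prior weighted by the cube of
    the orientation-dependent GW amplitude factor."""
    c2 = np.cos(theta) ** 2
    return np.sin(theta) * (0.25 * (1.0 + c2) ** 2 + c2) ** 1.5

# Rejection sampling on [0, pi/2]; the density stays below 1 there.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi / 2, 200_000)
samples = theta[rng.uniform(0.0, 1.0, theta.size) < schutz_density(theta)]
print(np.degrees(np.median(samples)))  # roughly 30-40 degrees
```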
3.3. GRB Prompt

In Figure 4 we show the distribution of the rest-frame spectral energy distribution (SED) peak energy E peak versus the isotropic-equivalent energy E iso of events for which both the GW signal and the GRB prompt emission meet our detectability criteria (considering the O4 HLVK network and Fermi/GBM, green filled contours), and separately those that are detectable in GW (black dashed contours) or by Fermi/GBM (magenta contours). In particular, different shades in the green regions progressively contain 50%, 90% and 99% of the joint GRB prompt- and O4-detectable binaries (the detection rate corresponding to this region is shown by orange lines in Figure 1 and reported in Table 1). The dashed black line is the 90% confidence region for the O4-detectable binaries without the constraint on the GRB prompt detectability. The comparison between the GRB prompt detections by Fermi/GBM and the known cosmological population shows a broad consistency with the sample of short GRBs (SGRBs) with known redshift (D'Avanzo et al. 2018; Salafia et al. 2019, grey diamonds). The position of GRB170817A in this plane (Abbott et al. 2021e) is shown by the orange diamond, which is consistent by construction with the position of the small island in the left-most part of the plot, which represents events whose emission is dominated by the cocoon shock breakout component.

Figure 2. The left-hand panel shows the apparent AB magnitude versus post-merger time for our simulated KN light curves, restricting to O4 GW-detectable sources. The shaded regions contain 50%, 90% and 99% of the KN light curves. Blue and red colors refer respectively to the g (484 nm) and z (900 nm) band. Colored circles show extinction-corrected AT2017gfo data rescaled to the median distance of our population (∼ 176 Mpc). The right-hand panel shows the cumulative distributions of apparent magnitude at peak, at 3 days and at 5 days after the merger (solid, dashed and dotted lines, respectively).

4. DISCUSSION AND CONCLUSIONS

In this Letter we presented our predictions for the detection rates and properties of KNe and GRBs (including both prompt and afterglow emission) that will be associated with double neutron star binary mergers to be detected during the next GW detector network run O4, planned to start in December 2022. These predictions are based on a synthetic population of events with an observationally motivated mass distribution and event rate density, for which we computed GW signal-to-noise ratios, KN light curves, GRB afterglow light curves and prompt emission peak photon fluxes, enabling the direct evaluation of the detectability of each emission component for each event in the population.

KNe are produced in 78% of mergers in our population, the remaining fraction being massive events that result in a prompt black hole collapse with neither disk nor ejecta (see Figure 8 in Appendix A.1). We find light curves that are intrinsically similar to, but on average slightly dimmer than, AT2017gfo (Fig. 2). Despite the larger median distance with respect to events detected in the previous runs, their apparent brightness in most cases (95% of events with an associated KN) will still exceed the typical limits reached in previous optical counterpart searches, but for a limited time (only the first night in the g band, a few nights in the z band), making the detection and identification of these sources more challenging than it had been for AT2017gfo.
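Rescaling an observed light curve to a different distance, as done for the AT2017gfo data in Figure 2, only requires the inverse-square dimming law in magnitude form. The sketch below assumes a reference distance of about 40 Mpc for AT2017gfo (the commonly quoted value) and an illustrative peak magnitude, and ignores K-corrections for simplicity.

```python
import numpy as np

def rescale_magnitude(m_obs, d_ref_mpc, d_new_mpc):
    """Shift apparent magnitudes from a reference distance to a new one
    (inverse-square law; K-corrections neglected for nearby sources)."""
    return m_obs + 5.0 * np.log10(d_new_mpc / d_ref_mpc)

# AT2017gfo-like data point: g ~ 17.5 at ~40 Mpc (illustrative numbers),
# moved to the ~176 Mpc median distance of the O4-detectable population:
print(rescale_magnitude(17.5, 40.0, 176.0))  # ~20.7, near the search limit
```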
Figure 3. Peak νL ν versus time for the GRB afterglow light curves associated with O4-detectable sources in our population. Solid and dashed contours contain 50% and 90% of the peaks, respectively. Red, green and blue colors indicate the radio (1.4 × 10 9 Hz), optical (4.8 × 10 14 Hz) and X-ray (2.4 × 10 17 Hz) bands, respectively. The colored circles are the observed data of GRB170817A (Makhathini et al. 2021).

Figure 4. Rest-frame SED peak photon energy E peak versus the isotropic-equivalent energy E iso for our NSNS population. The filled green colored regions contain 50%, 90% and 99% of the binaries that are both GRB Prompt- and O4-detectable. The magenta lines contain 50%, 90% and 99% (solid, dashed and dotted, respectively) of the GRB Prompt-detectable binaries. The black dashed line contains 90% of the O4-detectable binaries. The black dots with error bars represent a SGRB sample for comparison (Salafia et al. 2019). The orange dot is GRB170817A.

Relativistic jets are produced in 52% of the events in our population. Their GRB prompt emission exceeds our assumed limits in only a few percent of the events, with only a minor improvement when considering the deeper thresholds representative of a sub-threshold search in the gamma-ray detector data. A more promising route for the association of a GRB with a NSNS event in O4 will be that of a sub-threshold GW event search in response to a gamma-ray trigger, which results in a joint detection rate of 0.75 +1.16 −0.55 yr −1 in our model, thanks to the expanded GW horizon.

Radio observations represent the best route for the detection of the relativistic jet afterglow when monitoring a well-localised event. Indeed, radio afterglows are brighter than our 'candidate monitoring' detection limits in around one tenth of the simulated events, corresponding to a detection rate of 0.78 +1.21 −0.58 yr −1 (achievable if all candidates are localised to arc-second accuracy through the detection of their KN emission). These predictions indicate that one new relativistic jet counterpart in O4, which would constitute an important new piece of information on these sources, is not unlikely, yet not guaranteed.

For what concerns the observable properties of the relativistic jet counterparts, if a fortunate GRB prompt emission detection takes place, we expect it to be dominated either by the cocoon shock breakout emission component (for events closer than ∼ 100 Mpc), or more likely by emission from the slower, less energetic material that surrounds the jet core, if a mechanism similar to that which produces the prompt emission of cosmological GRBs extends to two to three times the jet core opening angle (Fig. 4). A due caveat here is that it is unclear to which extent the (poorly known) prompt emission mechanism of GRBs operates outside the jet core and, conversely, the current understanding of shock breakout emission does not extend to highly anisotropic, highly relativistic cases, making any statement on the observable properties of the shock breakout from parts of the cocoon closer to the jet axis highly uncertain. The observable GRB afterglows (Fig. 3) are expected to display properties similar to those of GRB170817A, that is, a shallow increase in flux density over a few months after the merger, followed by a peak and a relatively fast decay afterwards.
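Placing an observed data point on the (νL ν , t) plane of Figure 3 uses the specific-luminosity definition given in Sect. 3.2, L ν = 4π d L 2 F ν /(1 + z). A minimal sketch, with illustrative input values, follows.

```python
import numpy as np

MPC_CM = 3.0857e24  # centimeters per megaparsec

def nu_L_nu(flux_density_mjy, nu_hz, d_l_mpc, z):
    """nu * L_nu in erg/s from an observed flux density, using
    L_nu = 4 pi d_L^2 F_nu / (1 + z) as defined in the text."""
    f_nu_cgs = flux_density_mjy * 1e-26  # mJy -> erg/s/cm^2/Hz
    d_l_cm = d_l_mpc * MPC_CM
    return nu_hz * 4.0 * np.pi * d_l_cm**2 * f_nu_cgs / (1.0 + z)

# Illustrative GRB170817A-like radio point: ~0.1 mJy at 1.4 GHz, ~40 Mpc:
print(nu_L_nu(0.1, 1.4e9, 40.0, 0.0098))  # a few x 10^35 erg/s
```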
Still, a few percent of the detectable events in our population feature an earlier peak, corresponding to a smaller viewing angle, which would constitute an interesting case study that would bridge the gap between the viewing angles of cosmological GRBs and that of GRB170817A.

In the last years, several works predicting the joint GW+EM detection rates during O4 have been published or circulated as pre-prints, each focusing on a single or at most two EM counterparts (e.g. Zhu et al. 2021; Mochkovitch et al. 2021). Once the assumed local rate density is accounted for, with respect to studies that used the O2 estimate (which was higher by a factor of around three), our joint GW+EM detection rate predictions are in general agreement with most of these previous works. In particular, both Zhu et al. (2021) and Mochkovitch et al. (2021) find a similarly large fraction, 60-80%, of KNe detectable with thresholds similar to ours; factoring in the different fraction of jet-launching events (52% in this work, compared to 100% in the others), our estimate that up to 10% of the afterglows will be detectable in radio is in agreement with the 20% estimated by Duque et al. (2019) and Saleem et al. (2018). The prediction that only a few percent of the NSNS events detectable in O4 through GW emission will have a detectable short GRB is in line (again factoring in our 52% fraction of jet-launching systems) with, e.g., Belgacem et al. (2019), Howell et al. (2019) and Yu et al. (2021), while Saleem (2020) and Mogushi et al. (2019) find somewhat higher fractions (but note that the estimate for sub-threshold GW detections triggered by GRB detections from Saleem 2020 is in good agreement with ours). Still, it is worth stressing the fact that all of these previous models assume either identical properties for all counterparts, or they use empirical parametrizations for the distributions of their properties.

As a final remark, our estimates make GRB170817A an extremely lucky event (in line with, e.g., Mochkovitch et al. 2021, but see also Perna et al. 2021), which is not going to repeat soon. Given the excellent agreement of our model predictions with the short GRB cumulative peak flux distribution observed by Fermi/GBM and Swift/BAT (see Figure 11 in the Appendix), we consider this a robust statement. Still, we caution that all our predictions are based on loose observational constraints, and carry systematic uncertainties that have not yet been explored, due to the complexity of the full population modeling. The synergy between gravitational and electromagnetic telescopes in future runs will provide us with more observations, allowing us to get closer and closer to the real physics of these events.

APPENDIX

A.1. Mass distribution

Available observational constraints (Galaudage et al. 2021), and arguments based on the incidence of jets (Salafia et al. 2022), clearly point to a relatively broad NSNS mass distribution. With the aim of defining a simple mass distribution informed only by merging NSNS binaries (as opposed to those obtained by including also the masses of neutron stars in BHNS binaries, Landry & Read 2021; Abbott et al. 2021f), we devised the following method. We assumed the component mass probability distribution to be factorized into the chirp mass M c probability and the probability of the mass ratio q = M 2 /M 1 (assumed independent of each other), namely P(M 1 , M 2 ) = P(M c ) P(q) J(M 1 , M 2 ), where J(M 1 , M 2 ) = M c /M 1 2 is the Jacobian that relates the two parametrizations (Callister 2021). We then adopted a truncated power-law parametrization for each of these unknown probability distributions, where Θ is the Heaviside step function that sets the support of each power law and α and β are the free slope parameters.
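A concrete reading of this parametrization is sketched below, under our assumption that both factors are truncated power laws with slopes α and β and with support set by the Θ functions; the sign convention for α and the normalizations are our illustrative choices, not necessarily those of the original analysis.

```python
import numpy as np

def p_chirp_mass(mc, alpha, mc_min=1.1, mc_max=2.0):
    """Truncated power law in chirp mass; the Theta functions of the
    text translate into the support check below."""
    inside = (mc >= mc_min) & (mc <= mc_max)
    return np.where(inside, mc ** (-alpha), 0.0)

def p_mass_ratio(q, beta):
    """Truncated power law in mass ratio, supported on 0 < q <= 1."""
    inside = (q > 0.0) & (q <= 1.0)
    return np.where(inside, q ** beta, 0.0)

def p_component_masses(m1, m2, alpha, beta):
    """Joint density via the factorization P(Mc)P(q)J, with the
    Jacobian J = Mc / M1^2 quoted in the text."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
    q = m2 / m1
    return p_chirp_mass(mc, alpha) * p_mass_ratio(q, beta) * mc / m1**2

# Evaluated at the maximum a posteriori parameters quoted below:
print(p_component_masses(1.4, 1.33, alpha=8.67, beta=14.0))
```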
We fixed M c,max = 2 M ⊙ , which for an equal-mass binary corresponds to M 1 = M 2 = 2.3 M ⊙ (but we note that this choice does not impact our results significantly). We then looked for the maximum a posteriori values of M c,min and α: accounting for measurement uncertainties, the posterior probability on our two parameters can be written in terms of the chirp mass posterior samples of the GW-detected NSNS binaries and of the adopted priors π(M c,min ) and π(α). The prior support on M c,min was set by the smallest chirp mass observed in a Galactic NSNS system, while we adopted a broad uniform prior on α in the range 0 ≤ α ≤ 20. The resulting posterior probability density is shown in Figure 5, which shows that the maximum a posteriori probability density is at α = 8.67 and M c,min = 1.1 M ⊙ , the latter being located on the edge of the prior support (which is based on the lowest chirp mass observed in Galactic NSNS binaries). This tells us that the estimate of M c,min is informed by EM Galactic NSNS observations, in addition to GW NSNS merger observations: in that sense, this is a multi-messenger estimate.

In order to constrain the mass ratio distribution parameter β, we used instead the observed Galactic NSNS mass posteriors from Farrow et al. 2019. Their sample comprises N = 10 NSNSs that will merge within a Hubble time, for each of which they provide N s = 10 4 component mass posterior samples. We constructed mass ratio posterior samples {q i,j } i=1,...,N; j=1,...,Ns from these samples, adopting the appropriate mass ordering to ensure q ≤ 1 for each posterior sample pair. The posterior probability on the β parameter was then computed from these samples, adopting a uniform prior π(β) in the range 0 ≤ β ≤ 40. The resulting posterior probability distribution is shown in Figure 6, which shows a large uncertainty, but a well-defined peak at β = 14. Figure 7 compares the observed Galactic NSNS mass ratio cumulative distribution and our mass ratio distribution model P(q | β) with the maximum a posteriori value β = 14.

It is instructive to compare our mass probability distribution with others in the literature. To that purpose, we show in the right-hand panels of Figure 9 a comparison of the probability distributions of component masses implied by our result (red lines) with the corresponding distributions from a recently published population synthesis model (Broekgaarden et al. 2021, their fiducial model) based on the COMPAS code (Riley et al. 2022), and with the result of the study by Galaudage et al. 2021, which models the Galactic NSNS population and the GW-detected NSNS binaries together. These comparisons show that, despite the large uncertainties and the simplifying assumptions, our results fall in a reasonably similar range as other results based on more refined methodologies. Last, but not least, our mass distribution combined with our choice of the EoS leads to a large fraction of remnants that satisfy the basic requirements for the launch of a relativistic jet by the Blandford & Znajek 1977 process, namely a hyper-massive NS or a BH remnant and a non-negligible accretion disk, as required by the high observed incidence of jets (see Salafia et al. 2022, who discuss this argument and the implied mass distribution constraints in detail).

A.2. Redshift distribution

Merging binary neutron stars are thought to form either from isolated stellar binaries or in dense stellar environments such as stellar clusters, in which dynamical interactions can play a non-negligible role in their formation and evolution (Smarr & Blandford 1976; Srinivasan 1989; Portegies Zwart & Yungelson 1998; Bhattacharya & van den Heuvel 1991).
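Returning briefly to the mass-ratio inference of Appendix A.1 above: the hierarchical estimate of β can be sketched as follows, where for each candidate β the per-binary likelihood is approximated by the Monte Carlo average of P(q | β) over that binary's posterior samples, and the per-binary terms are multiplied together. Fabricated placeholder samples stand in for the Farrow et al. 2019 ones.

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder for the Farrow et al. 2019 samples: q_samples[i, j] is the
# j-th mass ratio posterior sample of Galactic binary i (q <= 1 enforced).
q_samples = np.clip(rng.normal(0.93, 0.05, size=(10, 10_000)), 0.0, 1.0)

def log_posterior_beta(beta, q_samples):
    """Hierarchical log-posterior of beta under a uniform prior:
    sum over binaries of log <P(q | beta)> over posterior samples,
    with P(q | beta) = (beta + 1) q^beta on 0 < q <= 1 (normalized)."""
    like = (beta + 1.0) * q_samples**beta        # per-sample likelihoods
    return np.sum(np.log(like.mean(axis=1)))     # Monte Carlo average

betas = np.linspace(0.0, 40.0, 81)
logp = np.array([log_posterior_beta(b, q_samples) for b in betas])
print(betas[np.argmax(logp)])  # maximum a posteriori beta
```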
Taking into account the strong dependence of the GW-driven coalescence timescale t c,GW on the binary separation a, t c,GW ∝ a 4 , and expressing the probability distribution of a as a power law with index x, namely dP/da ∝ a x , the probability distribution of the delay time between the start of the GW-driven inspiral and the coalescence is dP/dt c,GW = (dP/da)(da/dt c,GW ) ∝ t c,GW −3/4+x/4 (Piran 1992). Being the result of a diverse and complex range of processes, it is reasonable to expect the separation distribution dP/da to be close to uniform in the logarithm, and hence x ∼ −1. This translates into a delay time distribution dP/dt c,GW that is also close to uniform-in-log, and the weak x/4 dependence ensures that this remains approximately true unless x is very large in absolute value. When the coalescence timescale t c,GW is longer than t SN2 , the time elapsed between the birth of the binary and the formation of the second neutron star, the delay t d between the binary formation and its coalescence also follows the same power law; conversely, for very short GW coalescence timescales, the delay time is t d ∼ t SN2 . These arguments lead to a delay time distribution of the form P(t d ) ∝ t d −1 for t d ≥ t SN2 , where t SN2 is the mean time to the second supernova, which we take as t SN2 = 50 Myr, appropriate for the lightest neutron star progenitors. This distribution is broadly consistent with the results of detailed binary stellar population synthesis models (e.g. Dominik et al. 2012). With the further simplifying assumption of a constant fraction of stellar mass going into binaries that end up as double neutron stars throughout the history of the Universe, the cosmic NSNS merger rate density (the number of mergers per unit time per unit comoving volume dV) can then be modelled as Ṙ(z) ∝ ∫ ψ(t(z) + t d ) P(t d ) dt d , where t = t(z) is the lookback time corresponding to redshift z and ψ is the cosmic star formation rate density, for which we adopt the analytical form given in Madau & Dickinson 2014.

A.3. Local rate density

The normalization of the assumed NSNS merger rate density, namely the local neutron star merger rate density Ṙ(0) = R 0 , was set based on the self-consistency of the total number of NSNS detections in the three past observing runs of the advanced GW detector network and the number expected given our chosen mass and redshift distributions. To do this in practice, we needed to estimate the effective time-volume searched by the LIGO-Virgo network during the three observing runs O1, O2 and O3, which can be defined as V eff = f det (< z max ) ∫ 0 zmax (dV/dz) (1 + z) −1 dz (e.g. Tiwari 2018), where dV/dz is the differential comoving volume (Hogg 1999), z max is any redshift beyond the O3 GW detectability horizon, and f det (< z max ) is the fraction of detectable NSNS mergers within z max . To estimate the latter, we took the publicly available LVK Collaboration O1+O2+O3 sensitivity study Monte Carlo samples (LIGO Scientific Collaboration et al. 2021), we re-sampled them to reflect our assumed mass and redshift distributions, and then computed f det (< z max ) as the fraction of events that satisfied our detectability cut SNR net ≥ 12 over the total within z max . This resulted in V eff = 5.1 × 10 −3 Gpc 3 . Given the actual number N obs = 2 of observed NSNS events that satisfy the same cut (i.e. GW170817 and GW190425), and given the total O1+O2+O3 effective observing time T = 1.23 yr (Abbott et al.
2021b, representing the total time span of observing periods with at least one active detector), we obtained the posterior on the local merger rate density R 0 (conditional on our assumed mass and redshift distributions) as P(R 0 | N obs ) ∝ π(R 0 ) (R 0 V eff T) N obs exp(−R 0 V eff T), where we adopted the Jeffreys prior π(R 0 ) = R 0 −1/2 . The resulting median and symmetric 90% credible interval are R 0 = 347 +536 −256 Gpc −3 yr −1 . This estimate therefore includes the statistical Poisson uncertainty stemming from the small number of observed events, but not any model systematic uncertainty (which would result in a larger uncertainty, probably more akin to the ones from Abbott et al. 2021f), which is not explored here.

B. EM EMISSION MODELS

In the following section, we briefly describe the models employed to compute the EM emission from the NSNS mergers in our synthetic population. We refer to Perego et al. 2017, Barbieri et al. 2019b, Breschi et al. 2021, Salafia et al. 2019 and Salafia et al. 2020 for more detailed descriptions.

B.1. Ejecta

We divide the material ejected in a NSNS merger into three broad classes. The first component, the dynamical ejecta, is material ejected on dynamical timescales (∼ ms) by either tidal forces operating during the last phases of the inspiral (which launch cold, highly neutron-rich material mainly close to the equatorial plane), or by shocks generated in the collision of the neutron star cores (which generate a higher-entropy, less neutron-rich component that is launched more isotropically). Depending on the specific angular momentum distribution of the decompressed NS matter, a certain fraction can be centrifugally supported, forming an accretion disk around the merger remnant. The accretion disk can then produce additional ejecta in the form of winds, on longer timescales. We divide these into 'wind' ejecta, carried along directions close to the polar axis by the neutrino flux produced in the inner, hotter regions of the disk during the neutrino-dominated phase (typically lasting a few tens of ms, e.g. Just et al. 2015), and 'secular' ejecta, released due to viscous angular momentum transport on the viscous time scale (of the order of 1 s, e.g. Just et al. 2015) with a fairly isotropic distribution.

In our model, we compute the ejecta properties, as a function of the binary parameters (namely the component masses and the EoS), using fitting formulae based on numerical simulations of the merger and post-merger dynamics. In particular, we adopt the fitting formulae from Krüger & Foucart 2020 and Radice et al. 2018 in order to compute the mass and average velocity of the dynamical ejecta, respectively. We instead compute the accretion disk mass using a fitting formula whose predictions are consistent with both symmetric and asymmetric NSNS merger numerical simulations presented in Radice et al. 2018, Kiuchi et al. 2019, Bernuzzi et al. 2020 and Vincent et al. 2020. Finally, we compute the masses of the wind and secular ejecta by assuming that fixed fractions of the accretion disk mass, ξ w = 0.05 and ξ s = 0.2 respectively, go into these components (Perego et al. 2014; Fernández & Metzger 2016; Siegel & Metzger 2017; Fujibayashi et al. 2018). For each ejecta class we finally assume angular profiles of rest-mass density, average velocity and opacity identical to model ANI-DVN from Breschi et al. 2021.

B.2. Kilonova

We compute the kilonova light curves following Perego et al. 2017 (based in part on the works by Grossman et al. 2014 and Martin et al. 2015), with the additions described in Barbieri et al. 2019b and Breschi et al. 2021.
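The rate normalization of Appendix A.3 above has a closed form: with a Poisson likelihood and a Jeffreys prior, the posterior on R 0 is a Gamma distribution with shape N obs + 1/2 and rate V eff T. The sketch below evaluates it and reproduces the quoted median and 90% interval.

```python
from scipy.stats import gamma

N_OBS = 2              # GW170817 and GW190425
V_EFF = 5.1e-3         # effective searched volume [Gpc^3]
T_OBS = 1.23           # effective O1+O2+O3 observing time [yr]

# Jeffreys prior R0^(-1/2) times Poisson likelihood => Gamma posterior:
posterior = gamma(a=N_OBS + 0.5, scale=1.0 / (V_EFF * T_OBS))

median = posterior.median()
lo, hi = posterior.ppf([0.05, 0.95])   # symmetric 90% credible interval
print(f"R0 = {median:.0f} -{median - lo:.0f} +{hi - median:.0f} Gpc^-3 yr^-1")
# ~ 347 -256 +536, matching the value quoted in Appendix A.3
```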
In brief, the computation is based on a semi-analytical model in which axisymmetry relative to the direction of the binary angular momentum is imposed. The ejecta, assumed to be in homologous expansion, are divided into polar angle bins, and the thermal emission at the photosphere of each angular bin along radial rays is computed following Grossman et al. 2014 and Martin et al. 2015, taking into account the projection of the photosphere in each bin. Figure 10 shows the time evolution of the distribution of KN brightness in the g and J bands for our population, computed with the above model and the prescriptions described in the text to link the KN ejecta properties to those of the progenitors, similarly to Figure 2 (which referred to the g and z bands, instead).

B.3. Relativistic jet

We assume the relativistic jet to be launched by the Blandford-Znajek mechanism, which requires a spinning BH surrounded by a magnetized accretion disk (Blandford & Znajek 1977; Komissarov 2001). In the context of NSNS mergers, in order for these requirements to be fulfilled, the remnant must collapse to a BH on a time shorter than the disk viscous timescale, which restricts the possible merger outcomes to hypermassive neutron stars and prompt BH collapse only, that is, to remnants with a mass M rem ≥ 1.2 M TOV (e.g. Salafia et al. 2022), with the additional requirement that an accretion disk must form, which in prompt collapsing cases is possible if the binary is asymmetric (Bernuzzi et al. 2020). When these conditions are fulfilled, we compute the jet total injected energy as in Barbieri et al. 2019b, as a function of the disk mass M disk and the remnant BH spin χ BH , via the quantities Ω H = χ BH /[2(1 + √(1 − χ BH 2 ))] and f(Ω H ) = 1 + 1.38 Ω H 2 − 9.2 Ω H 4 (Tchekhovskoy et al. 2010). The dimensionless constant ε is fixed by imposing the accretion-to-jet energy conversion efficiency η = ε Ω H 2 f(Ω H ) to be η = 10 −3 when χ BH = 0.71, therefore matching the inferred efficiency in GW170817 (Salafia & Giacomazzo 2021). This leads to ε = 0.022.

Part of this energy is spent by the jet in its propagation through the ejecta cloud. Following Duffell et al. 2018, we assume the energy needed for the jet to successfully break out of the ejecta to be E bkt = 0.05 θ j,0 2 E ej , where we set the jet opening angle at launch θ j,0 = 15 • and we compute the ejecta energy as the sum of the isotropic-equivalent energies (averaged within an angle θ j,0 from the polar axis, and accounting for their assumed angular profiles - see the previous section and Breschi et al. 2021) of the three considered ejecta components, E ej = E iso,ej,dyn + E iso,ej,wind + E iso,ej,sec . If E jet,0 ≤ E bkt , we consider the jet to be choked during the propagation and we neglect its emission (this happens in about 1% of jet-launching systems in our population); otherwise, we assume the jet to successfully break out, with an available energy E jet = E jet,0 − E bkt .

We assume jets that successfully break out to be endowed with a jet structure (angular energy and bulk Lorentz factor profiles) featuring a uniform core of half-opening angle θ c , surrounded by "wings" with power-law decreasing energy density and Lorentz factor. Explicitly, the energy per unit solid angle and the bulk Lorentz factor are uniform within θ c and decrease as power laws with slopes s E and s Γ outside of it, where ε c = (E jet − E bkt )/(πθ c 2 ) is the core energy per unit solid angle and Γ c is the core Lorentz factor. We keep the structure parameters identical across the population, fixing θ c = 3.4 • , s E = 5.5, Γ c = 251 and s Γ = 3.5, which are the best-fit values for the GRB170817A afterglow from Ghirlanda et al. 2019.
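A sketch of such a structured jet profile is given below. The specific smoothly broken power-law form is an assumption on our part (one common way to realize a 'uniform core plus power-law wings' profile), not necessarily the exact function of the original model; reassuringly, with the quoted parameters it yields Γ ≈ 10 at θ ≈ 8.7 degrees, matching the value of θ γ quoted in Appendix B.3.1.

```python
import numpy as np

# GRB170817A best-fit structure parameters quoted in the text:
THETA_C = np.radians(3.4)   # core half-opening angle
S_E, S_G = 5.5, 3.5         # power-law slopes of the wings
GAMMA_C = 251.0             # core bulk Lorentz factor

def energy_per_solid_angle(theta, eps_core):
    """Uniform core + power-law wings (assumed functional form)."""
    return eps_core / (1.0 + (theta / THETA_C) ** S_E)

def lorentz_factor(theta):
    """Same shape for the Lorentz factor profile, floored at Gamma = 1."""
    return 1.0 + (GAMMA_C - 1.0) / (1.0 + (theta / THETA_C) ** S_G)

theta = np.radians([0.0, 3.4, 8.7, 30.0])
print(lorentz_factor(theta))  # Gamma drops to ~10 near theta ~ 8.7 deg
```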
B.3.1. Gamma-ray burst prompt emission

As stated in the main text, we compute the prompt emission spectrum following Salafia et al. 2015, 2019, assuming the conversion efficiency of jet energy into radiation to be η γ = 0.15 in regions of the jet with Γ ≥ 10, and zero otherwise. The isotropic-equivalent specific luminosity at observer frequency ν, as measured by an observer who sees the jet under a viewing angle θ v , and under the assumption of a viewing-angle-independent emission duration T, is then given by an integral of the Doppler-boosted comoving emission over the jet solid angle (Salafia et al. 2015), where z is the source redshift, θ γ is the angle such that Γ(θ γ ) = 10 (which is θ γ = 8.7 • with our parameters), δ is the relativistic Doppler factor of material located at spherical angular coordinates (θ, φ), and S ν′ is the comoving spectral shape, which we assume to be a power law with an exponential cut-off, S ν′ ∝ (ν′) a exp[−(1 + a)ν′/ν′ p ], with a = 0.24 and hν′ p = 3 keV (h here is Planck's constant), similarly as in Salafia et al. 2019, and the normalization is such that ∫ S ν′ dν′ = 1. To account for the contribution of a putative shock breakout emission component, for viewing angles θ v < 60 • we include an additional emission component with properties identical to GRB170817A, namely an isotropic-equivalent luminosity L iso = 10 47 erg/s and a cut-off power-law spectrum (same shape as the assumed prompt emission spectrum) with hν p = E peak = 185 keV and a = 0.38 (Abbott et al. 2021e). From the specific luminosity we obtain the photon flux in the [hν 0 , hν 1 ] observing band by integrating the corresponding flux density F ν = (1 + z) L ν /(4π d L 2 ) over the band in photon units (i.e. dividing by the photon energy hν), where d L is the source luminosity distance. Figure 11 shows the inverse cumulative distributions of photon fluxes in the [10, 1000] keV (blue) and [15, 150] keV (red) bands for our population (dashed lines), and the corresponding distributions for short GRBs observed by Fermi/GBM and Swift/BAT, respectively (solid lines, with the shaded area showing the 90% confidence regions, including both measurement uncertainties and Poisson count statistics). The distributions for our model are computed accounting for the duty cycle and field of view factors for each instrument, for a fair comparison.

Figure 12. Viewing angle θ v versus redshift z for our NSNS population. The filled grey regions contain 50%, 90% and 99% of the GW O4-detectable binaries. Solid, dashed and dotted contours contain 50%, 90% and 99% of the binaries that exceed both the O4 GW SNR net limit and any one of the 'counterpart search' limits relevant to the particular counterpart considered (blue: KN; red: GRB afterglow; orange: GRB prompt). The corresponding detection rates are reported in Figure 1.

B.3.2. Afterglow

The afterglow emission model is described in Salafia et al. 2019 and Barbieri et al. 2019b. In brief, this is a semi-analytical model based on standard afterglow theory (Sari et al. 1998; Panaitescu & Kumar 2000), extended to the case of an inhomogeneous jet and an off-axis viewing angle. The shock dynamics model is valid in both the ultra-relativistic and the non-relativistic regime, but it does not include lateral expansion. The emission model only includes synchrotron emission, assuming the relativistic electron energy 'equipartition' parameter ε e = 0.1, the magnetic field equipartition parameter ε B = 10 −3.9 and the electron index p = 2.15 to be constant (throughout the evolution and independent of the angle), based on GRB170817A. Synchrotron self-absorption is included. The interstellar medium in which the shock expands is assumed to have a uniform number density of 5 × 10 −3 cm −3 .
The surface brightness is computed locally based on the fluid properties behind the shock, and the flux is computed by integrating the surface brightness over equal-arrival-time surfaces at the relevant viewing angle, accounting for relativistic effects.

B.4. Viewing angle versus redshift

In Figure 12 we show the distribution of some sub-samples of our population in the viewing angle θ v versus redshift z plane. Grey filled contours refer to HLVK O4-detectable binaries, while empty contours refer to jointly GW- and EM-detectable binaries: in particular, the blue, orange and red lines refer to KN+O4, GRB Prompt+O4 and GRB Afterglow+O4 detectable binaries, respectively. The detection rates corresponding to these regions are shown by lines of the same color in Figure 1 and are reported in Table 1. The figure clearly shows the weak dependence on redshift for the jet-related emission, whose luminosity is strongly dependent on the viewing angle. Moreover, 90% (50%) of the GRB Prompt+O4 and GRB Afterglow+O4 events have relativistic jets seen under a viewing angle lower than ∼ 15 (10) degrees.

B.5. Detection rate versus detection limit

In Section 2 we report the detection rates for joint GW and EM events considering two representative detection limit sets based on the two main scenarios considered in this work. In order to allow the community to explore alternative observing configurations that correspond to different detection limits, we show in Figure 13 the distribution of the detection rates as a function of the detection limit for the GRB Prompt+O4 (upper panel, orange) and GRB Afterglow+O4 (lower panel, red) detectable binaries (for KNe, such information is already contained in the right-hand panels of Figures 2 and 10).

Figure 13. Detection rates as a function of the detection threshold limit for our NSNS population. The upper panel refers to GRB Prompt+O4 detectable binaries. The solid line indicates an all-sky field of view with a 100% duty cycle, while the dashed and dotted lines account for the Fermi/GBM and Swift/BAT duty cycle and field of view, respectively. The lower panel refers to GRB Afterglow+O4 detectable binaries. The solid, dashed and dotted lines indicate the radio, optical and X-ray band, respectively.

For the GRB Prompt+O4 detection we show the rates assuming an all-sky field of view and a 100% duty cycle (solid line) and accounting for the duty cycle and field of view of Fermi/GBM (dashed line) and Swift/BAT (dotted line). The figure shows how the GRB prompt+GW detection rate increases with the prompt emission detector sensitivity: if it were possible to reach photon flux threshold values of ∼ 0.1 ph cm −2 s −1 , the cocoon emission would start to be detected in essentially all jet-launching binaries (this produces the bump in the orange lines at the low-flux-limit end). For the GRB Afterglow+O4 events, we show individually the rates for the radio (solid), optical (dashed) and X-ray (dotted) bands. The detection limit value at which the curves saturate indicates the sensitivity needed to detect all the GRB Afterglow+O4 events, with a corresponding detection rate of 4.0 +6.1 −3.0 yr −1 (that is, the GW O4 detection rate of 7.7 +11.9 −5.7 yr −1 times the 52% fraction of jet-launching systems).
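Curves like those in Figure 13 can be built from the simulated population by counting, for each trial flux limit, the fraction of events whose predicted flux exceeds it and scaling by the GW detection rate. A minimal sketch follows, with fabricated placeholder fluxes standing in for the model outputs of Appendix B.3.1.

```python
import numpy as np

rng = np.random.default_rng(7)
R_GW_O4 = 7.7  # yr^-1, O4 GW detection rate quoted in the text

# Placeholder peak photon fluxes [ph/cm^2/s] for the GW-detectable events;
# in the study these come from the prompt-emission model.
peak_fluxes = 10 ** rng.normal(-1.5, 1.2, size=5000)

def rate_vs_limit(fluxes, limits, total_rate, fov_duty=1.0):
    """Joint detection rate as a function of the flux limit, optionally
    scaled by a field-of-view x duty-cycle factor (0.60 for Fermi/GBM)."""
    limits = np.asarray(limits)
    frac = (fluxes[None, :] >= limits[:, None]).mean(axis=1)
    return total_rate * frac * fov_duty

limits = np.logspace(-2, 1, 7)
print(rate_vs_limit(peak_fluxes, limits, R_GW_O4, fov_duty=0.60).round(3))
```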
The effect of intake channel length on water temperature at the intake point of the power plant at Muara Karang power plant

Muara Karang Power Plant (MKPP) is one of the main power plants on Java Island in Indonesia. Presently, the Jakarta provincial government has issued a reclamation project on Island G in the marine waters around MKPP. This reclamation effort is predicted to lead to a rise in the seawater temperature around the intake, which MKPP will address by extending the intake channel by 250-957 m. Therefore, this study aimed to determine the effect of the intake channel extension on the water temperature at the intake point using numerical modeling comprising hydrodynamic and advection-dispersion modules. A total of 10 scenarios were modeled by varying the intake channel length and the season. The results showed that extending the intake channel was relatively ineffective, because the average reduction in water temperature was less than 0.24 °C, with an effectiveness below 0.78%. Based on the validation of the modeling results against the measurement data, the NRMSD values in the west and east seasons were 9.13% and 12.63%, respectively. Under the existing conditions, the average and maximum seawater temperatures were 31.40 °C and 32.08 °C. Meanwhile, with the extended intake channel, the average and maximum water temperatures were 31.16 °C and 31.60 °C. These results showed that, with the extended intake channel, the temperature at the intake point was generally lower than under the existing conditions. The intake channel extension was more effective in reducing the temperature at the intake point during the west monsoon than during the east monsoon. Vertically, the temperature at the bottom was relatively colder than near the surface. In the west monsoon, the average temperature difference between the bottom and the surface ranged from 0.16 to 0.21 °C, while in the east monsoon, it was between 0.23 and 0.50 °C. In conclusion, the addition of subsequent structures to increase effectiveness is necessary, specifically to hold back hot water in the east monsoon.

Introduction

The operation of the condenser system in a Steam Power Plant (SPP) or a Gas & Steam Power Plant (GSPP) depends on a considerable amount of cooling water. On average, a coal-based power plant with cooling towers requires approximately 5 to 7 cubic meters (m 3 ) of water per megawatt-hour (MWh) (Tasnim, 2020). Seawater at ambient temperature is pumped directly into the condenser through the inlet pipe, an important piece of equipment in the turbine-boiler thermodynamic cycle of power plants. As a result, it exits the condenser at an increased temperature through the outfall.

The efficiency of the SPP or GSPP system is related to the quantity and temperature of the available cooling water (Wibowo & Asvaliantina, 2018; Genbach et al., 2021). It is important to stress that for every 1 °C increase in cooling water temperature, the efficiency of the power plant system decreases by approximately 0.168% (Darmawan & Yuwono, 2019). Therefore, most SPP or GSPP facilities are strategically built in coastal areas with unlimited water sources.
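To give a feel for the quoted sensitivity coefficients, the short calculation below combines the cooling-water demand (5-7 m³/MWh) and the efficiency penalty (approximately 0.168% per °C of intake temperature rise) for a hypothetical plant; the 500 MW capacity and the 0.5 °C rise are illustrative values, not data from MKPP.

```python
# Illustrative back-of-the-envelope figures using the coefficients quoted
# above; the capacity and temperature rise are assumptions.
capacity_mw = 500.0
water_per_mwh = (5.0, 7.0)          # m^3 of cooling water per MWh
efficiency_loss_per_degc = 0.168    # percent per degC (Darmawan & Yuwono)

hourly_water = tuple(w * capacity_mw for w in water_per_mwh)
print(f"cooling water demand: {hourly_water[0]:.0f}-{hourly_water[1]:.0f} m^3/h")

delta_t = 0.5  # degC rise at the intake (illustrative)
print(f"efficiency penalty: {efficiency_loss_per_degc * delta_t:.3f} %")
```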
Power generation activities tend to discharge wastewater back into the sea through cooling canals to reduce its temperature. However, after passing through these cooling channels, the water remains warmer than the surrounding sea. This temperature differential is mainly governed by the following physical processes: advection and diffusion (Cahyana, 2011; Panigrahi & Tripathy, 2011; Mirza et al., 2021). The distribution of heat in the cooling canal is influenced by several factors, such as the volume and discharge rate of the wastewater, the initial wastewater temperature, bathymetry conditions, the ambient seawater temperature, and the circulation patterns of ocean currents near the location where the wastewater is discharged into the seawater body (Cahyana, 2011; Panigrahi & Tripathy, 2011; Fikri et al., 2020).

Hot water discharged from power plants into natural water bodies has been identified as a source of heat or thermal pollution (Harmon, 2021). Heat pollution is defined as any deviation or increase from the natural ambient temperature in an ecosystem, which could be due to high temperatures associated with industrial cooling activities or the release of warm water into rivers enclosed by large embankments (Dodds & Whiles, 2010; Geurdes, 2023). The discharge of hot water from cooling canals into seawater bodies can potentially disrupt the sustainability of coastal and marine ecosystems. Detrimental effects have been observed when the increase in ambient sea temperature exceeds the threshold for the survival of marine biota (Rosen et al., 2015; Aljohani et al., 2022). Dallas (2009) stated that all organisms have a certain temperature range for optimal growth, reproduction, and fitness, often called the optimum thermal regime. The high temperatures from power plant thermal discharges severely affect the benthic fauna living near the discharge outlets (Deabes, 2020). An increase in temperature has also been observed to induce stress or mortality in organisms (Geurdes, 2023).

Hot water discharge not only affects the ecosystem but also influences the performance of the electricity generators themselves. This problem occurs when the hot water discharge reaches the location of the power plant's intake point. Generally, the water from the cooling system is hotter than the surrounding water, with temperatures reaching approximately 40 °C (Yustiani et al., 2015; Fikri et al., 2020), while the temperature of the surrounding waters is around 30 °C. These power plants usually require 45 to 55 m 3 /s of water to cool every megawatt at full load (Fudlailah et al., 2015). The efficiency of a power plant is highly dependent on the ambient water temperature. According to Petrakopoulou et al. (2020), every 10 °C increase in ambient water temperature decreases the efficiency by 0.3 to 0.7%. Makky and Kalash (2015) stated that a 1 °C increase in the temperature of the cooling water drawn from the environment decreases the power plant output and thermal efficiency by 0.45% and 0.12%, respectively. Therefore, cooling recirculation technology is important to reduce heat pollution and enhance power plant reliability (Miara et al., 2018).
Muara Karang Power Plant (MKPP) is a steam and gas power plant situated on the north coast of Jakarta City, playing a significant role in the power generation infrastructure of Java Island. It is an integral part of the Jakarta Bay ecosystem in the Java Sea, Indonesia, as shown in Fig 1. MKPP is crucial in supplying electricity to Java Island, particularly Jakarta, the capital city. The peak load demand for the Jakarta metropolitan area in 2015 reached approximately 7293 MW (BTIPDP-PTRIM, 2016).

The operational activities of MKPP require discharge rates of approximately 180,500 m³/hour of water through two intakes and outfalls (PJB UP Muara Karang, 2020). Presently, the Jakarta Provincial Government is undertaking a reclamation project near MKPP on G Island. This reclamation is expected to increase the seawater temperature around the intake point, leading to disturbances in the natural water circulation and a degree of isolation from the open sea (BTIPDP-PTRIM, 2016). Furthermore, the effects of climate change result in a yearly increase in ambient temperatures. This rise significantly impacts gas turbine performance, mainly due to the high temperatures at the compressor inlet (Fatimah et al., 2019).

To address the issue of hot wastewater affecting the temperature at the intake point, MKPP plans to extend the intake channel by 957 m, placing its end approximately 2 km from the beach. However, this extension project requires a significant cost. In order to evaluate the impact and effectiveness of the channel extension on the intake point temperature, computational modeling of the effluent dispersion patterns from the power plant is necessary. In 2019, MKPP implemented several steps to improve its wastewater treatment plant, resulting in a 21% reduction in service water usage compared to the monthly average (Hutajulu et al., 2020).

Modeling technology has been widely used as an effective approach to optimizing designs according to the conditions of an area (Aljohani et al., 2022). In the last few years, several numerical models, particularly those using Delft3D-FLOW, have been developed to investigate thermal pollution caused by power plant discharges (Laguna-Zarate et al., 2021). Typical examples include the application of numerical modeling with Delft3D-FLOW to simulate thermal plumes originating from the Veracruz Power Plant in the Gulf of Mexico (Durán-Colmenares et al., 2016) and the Yanbu Power Plant in Saudi Arabia (Aljohani et al., 2022). Numerical modeling of thermal dispersion in water using MIKE has been carried out at different locations, including the Paiton Power Plant (Fudlailah et al., 2015; Fikri et al., 2020), the Lekki coast of Nigeria (Panigrahi & Tripathy, 2011), the Bandar Abbas Power Plant in Iran (Abbaspour et al., 2006), and the PT Kilang Pertamina International Kasim process plant in the Sele Strait, West Papua (Yesaya et al., 2023). Additionally, a numerical simulation of water temperature in the Río de la Plata River and Montevideo's Bay was conducted using the finite element numerical model RMA-10 in its 2D vertically integrated mode (Fossati et al., 2011).
Several studies have been conducted on thermal dispersion, although none has examined or modeled the effect of extending an intake channel. It is important to note that the heat distribution emanating from MKPP has the potential to extend over an area as large as 156 hectares (Mihardja et al., 1999). Numerical modeling results showed that the temperature around the MKPP intake channel decreased by 1 to 3°C under the reclamation master plan (Islami et al., 2020). The reclamation of G Island is expected to raise the average temperature at the intake location by 0.32 to 0.7°C (Khoirunnisa et al., 2021). Furthermore, Suntoyo et al. (2021) conducted an experimental study on the hydraulics of the water intake channel. The study by Hananta (2018) focused on the impact of the reclamation of Island G using hot water dispersion modeling; however, it did not address the plan to extend the intake channel. To date, no investigation has been conducted on the effect of intake channel length on the cooling water temperature of a power plant. Therefore, the current study was conducted to determine the effectiveness of extending the MKPP intake channel. The results are expected to provide crucial insights for future strategies addressing this issue.

Materials and Methods
This study explored various intake channel length scenarios in a specific domain. The model simulation was performed using MIKE 3, a software package developed by the Danish Hydraulic Institute (DHI), which includes hydrodynamic (HD) and advection-dispersion (AD) modules. MIKE 3 is a widely recognized and reliable tool that consistently delivers highly accurate results across various applications. The modeling effort also included calculations near the emission source, using the coupled MIKE 3 solution module. Verified results of hydrodynamic and thermal dispersion modeling using MIKE 21 have shown patterns similar to field measurements (Abbaspour et al., 2006; Fikri et al., 2020). A study of the exhaust cooling system of LNG facilities in Kutch Bay, India, showed that the correlation coefficient between the modeling predictions and actual measurements was in the range of 86 to 98% (Gupta et al., 2014).

Governing equations
The modeling process in this study applied a hybrid equation system combining the hydrodynamic model, which describes the current dynamics, and the thermal dispersion model. The hydrodynamic model incorporates the continuity equation (Equation 1) and the momentum equations (Equations 2 and 3). The thermal dispersion model uses the advection-dispersion equation (Equation 4) (Danish Hydraulic Institute, 2017). The numerical method applied in this model is the finite difference method. The equations below are written in the standard DHI form; terms for wind stress, Coriolis forcing, and atmospheric pressure that appear in the full MIKE formulation are omitted for brevity.

Equation of continuity:

∂ζ/∂t + ∂p/∂x + ∂q/∂y = 0    (1)

Equation of momentum in the x-direction:

∂p/∂t + ∂/∂x(p²/h) + ∂/∂y(pq/h) + gh ∂ζ/∂x + gp√(p² + q²)/(C²h²) = 0    (2)

Equation of momentum in the y-direction:

∂q/∂t + ∂/∂y(q²/h) + ∂/∂x(pq/h) + gh ∂ζ/∂y + gq√(p² + q²)/(C²h²) = 0    (3)

where h(x,y,t) is the water depth (= ζ − d, m), d(x,y,t) is the time-varying bottom depth (m), ζ(x,y,t) is the surface elevation (m), p and q (x,y,t) are the flux densities in the x and y directions (m³/s/m), u and v are the depth-averaged velocities in x and y, C(x,y) is the Chezy resistance (m^(1/2)/s), and g is the acceleration due to gravity.

Governing equation for advection and dispersion:

∂c/∂t + u ∂c/∂x + v ∂c/∂y + w ∂c/∂z = ∂/∂x(Dx ∂c/∂x) + ∂/∂y(Dy ∂c/∂y) + ∂/∂z(Dz ∂c/∂z) + S    (4)

where c is the concentration of the component (arbitrary unit), u and v are the horizontal velocities in x and y (m/s), w is the vertical velocity (m/s), Dx, Dy, and Dz are the dispersion coefficients (m²/s), S = Qs(cs − c) is the source term, Qs is the source/sink discharge (m³/s/m²), and cs is the component concentration in the source/sink discharge.
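To make the role of Equation (4) concrete, here is a minimal one-dimensional sketch of the kind of step a finite-difference advection-dispersion solver performs. It is not the MIKE code: the grid sizes, coefficients, and the upwind/central-difference scheme are illustrative assumptions only.

```python
import numpy as np

def advect_disperse_step(c, u, D, dx, dt):
    """One explicit finite-difference step of a 1D advection-dispersion
    equation dc/dt + u*dc/dx = D*d2c/dx2.
    c: temperature field (°C), u: velocity (m/s), D: dispersion (m2/s)."""
    c_new = c.copy()
    # first-order upwind advection (assumes u >= 0)
    adv = -u * (c[1:-1] - c[:-2]) / dx
    # central-difference dispersion
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c_new[1:-1] = c[1:-1] + dt * (adv + disp)
    return c_new

# Illustrative values only: 30.5 °C ambient water with a 35 °C plume
# entering at the left boundary, advected by a weak 0.05 m/s current.
c = np.full(200, 30.5)
c[:10] = 35.0
for _ in range(1000):
    c = advect_disperse_step(c, u=0.05, D=0.01, dx=5.0, dt=10.0)
```

The chosen dt, dx satisfy the usual explicit-scheme stability limits (Courant number 0.1, diffusion number 0.004); a production solver such as MIKE handles these constraints internally.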
Study stages
The study commenced by defining ten distinct scenarios. Subsequently, the domain model was defined and built, considering both area and time. This was followed by the preparation of input data for the setup and execution of the models. The modeling results were then validated through a comparison with field data. The final stage involved post-processing the modeling results for further analysis and presentation.

Input data
The thermal advection-dispersion modeling depended on the main input data shown in Table 1. The climate in Jakarta shows seasonal variations, with temperatures ranging from 25 to 31°C and from 25 to 33°C during the west and east seasons, respectively. In October, the seawater temperature typically falls between 27 and 31°C. Humidity in Jakarta varies between 61% and 95%, and the average monthly rainfall amounts to 218.4 millimeters (mm). The wet season occurs between November and April, while May through October is typically dry (World Bank Group, 2021).

The thermal dispersion modeling at the open boundary used results obtained from the HD model, comprising critical factors such as surface elevation, current velocity, and direction. The HD model covered a larger domain, approximately 32 x 61 km, using surface elevation data extracted from the Tidal Model Driver (TMD) for 2019, available at one-hour intervals (Padman & Erofeeva, 2005). The outfall and intake discharges of MKPP are 14.4-30.4 m³/s and 15.08-35.07 m³/s, respectively (PJB UP Muara Karang, 2020).

The wastewater temperatures at outfall 1 of MKPP during the west and east monsoons were recorded at 35.5°C and 35.2°C, respectively. At outfall 2, the wastewater temperature was 34.7°C during the west monsoon and 34.2°C in the east monsoon (PJB UP Muara Karang, 2020). The initial ambient temperature was set at 30.5°C (PJB UP Muara Karang, 2020), with horizontal and vertical dispersion coefficients of 0.01 m²/s and 0.0001 m²/s, respectively.

Table 2 Scenarios of thermal dispersion modeling at MKPP
1   West-Existing       Simulation in the west monsoon with the existing intake channel length
2   West-Existing+250   Simulation in the west season with the existing intake channel plus 250 m
3   West-Existing+500   Simulation in the west season with the existing intake channel plus 500 m
4   West-Existing+750   Simulation in the west season with the existing intake channel plus 750 m
5   West-Existing+957   Simulation in the west season with the existing intake channel plus 957 m
6   East-Existing       Simulation in the east monsoon with the existing intake channel length
7   East-Existing+250   Simulation in the east season with the existing intake channel plus 250 m
8   East-Existing+500   Simulation in the east season with the existing intake channel plus 500 m
9   East-Existing+750   Simulation in the east season with the existing intake channel plus 750 m
10  East-Existing+957   Simulation in the east season with the existing intake channel plus 957 m
Domain of the model
The modeling domain assumes that the reclamation of Island G has been completed, except in the domain used for validation. The overall domain size is approximately 9 x 12 km, as shown in Fig 2. This domain is divided into three nested parts, with the smallest section centered around MKPP, featuring a maximum mesh area of 1000 m². The input surface elevation at the open boundary was extracted from a larger model comprising the entirety of Jakarta Bay, covering an expansive modeling domain of roughly 61 x 32 km. MKPP is equipped with two outfalls, for disposing of hot water waste, and an intake point, for drawing cooling water, all located in close proximity, as shown in Fig 2. This study used ten scenarios, defined by intake channel length and season, as shown in Table 2. The simulation period covers both the west monsoon (January 2019) and the east monsoon (July 2019).

Validation of modeling results
In this study, validation is a crucial step focused on assessing the performance of the hydrodynamic model, particularly the surface elevation. This process relies on a detailed comparison of the modeling results with the measurement data obtained from the Kolinamil station, located at coordinates 106.89083°E and 6.10667°S. For the current velocity, due to the absence of direct measurement data, the validation was carried out qualitatively, drawing on the results of previous studies. Based on the existing map of Jakarta Bay, the current velocity in both the bay and near the coast was estimated at roughly 0.05 m/s (BAPPEDA Provinsi DKI Jakarta, 2015).

In general, the hydrodynamic conditions in Jakarta Bay are characterized by significant variability in current direction, with an average current velocity of approximately 0.02 m/s (Pranowo et al., 2014). A similar study stated that the current velocity at the Kayangan station in Jakarta Bay ranges from 0.002 to 0.05 m/s, with prevailing directions predominantly to the west and east (Surya et al., 2019). Based on the modeling results for May 2019, the current speed ranged from 0.001 to 0.045 m/s, with dominant directions to the west and east, as shown in Fig 4. The thermal dispersion modeling results were validated against January and July data for the west and east seasons, respectively. Temperature measurements to validate the modeling results were performed by MKPP at eight specific points, as shown in Fig 5a (PJB UP Muara Karang, 2020). A visual comparison of the measured and modeled temperatures at these eight points is shown in Figs 5b and 5c. This validation showed that the NRMSD values in the west and east seasons were 9.13% and 12.63%, respectively. Validation can also be performed using remote sensing techniques when image data is accessible, an approach similar to that applied at the Gujarat Power Plant (Roy et al., 2022).
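The NRMSD statistic used above is straightforward to compute. A minimal sketch follows; it assumes the common normalization of the root mean square deviation by the range of the observations, since the paper does not spell out which normalization it uses, and the station values are illustrative, not the actual measurement data.

```python
import numpy as np

def nrmsd(observed, modeled):
    """Normalized Root Mean Square Deviation, in percent.
    Assumes normalization by the range of the observations."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    rmsd = np.sqrt(np.mean((modeled - observed) ** 2))
    return 100.0 * rmsd / (observed.max() - observed.min())

# illustrative temperatures (°C) at eight observation points
obs = [30.9, 31.2, 31.6, 32.0, 31.4, 31.1, 30.8, 31.3]
mod = [31.0, 31.1, 31.5, 31.8, 31.5, 31.2, 30.9, 31.2]
print(f"NRMSD = {nrmsd(obs, mod):.2f}%")
```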
Temperature variation at the water intake point
The thermal wastewater affects the seawater temperature at the intake point during both the east and west monsoons. In general, the longer the intake channel, the lower the seawater temperature at the intake point (Shawky et al., 2015). Under existing conditions during the west monsoon, the average and maximum seawater temperatures at the intake point are 31.40°C and 32.08°C, respectively. When the intake channel was extended, there was a noticeable reduction in the average and maximum water temperatures at the intake point, which dropped to 31.16°C and 31.60°C, as shown in Table 3. The same phenomenon occurred in the east monsoon, although the maximum seawater temperature at the intake point was higher, except in the Existing+957 m scenario.

The seawater temperature at the intake point is slightly higher in the east monsoon than in the west season. This variation is mainly attributed to the thermal wastewater from outfall 2, which flows through the Mutiara Beach canal, carried by the prevailing currents and eastward winds, although partially obstructed by G Island. Consequently, this raises the temperature at the intake channel mouth and at the intake point. During the west season, the presence of G Island slightly obstructs the flow of wind and currents from the west, limiting the transfer of heat from outfall 1 toward the intake channel.

Tides significantly influence the distribution pattern of hot discharge water from power plants (Mihardja et al., 1999; Fossati et al., 2011; Rosen et al., 2015; Wibowo & Asvaliantina, 2018). In the west monsoon, the seawater temperature at the intake point is highest at low tide, as shown in Fig 6a. Likewise, in the east monsoon, the highest mean temperature at the intake point coincides with the lowest tide, as shown in Fig 6b. This is because, at the lowest tide, wastewater from outfalls 1 and 2 is directed towards the intake channel mouth, eventually reaching the intake point.

In the west monsoon, all scenarios extending the intake channel tend to reduce the seawater temperature at the intake point, and the magnitude of this temperature decrease becomes more pronounced with longer intake channels, as shown in Table 3. During the west monsoon, the Existing+957 m scenario decreased the average water temperature at the intake point from 31.40°C to 31.16°C, a 0.77% reduction. The maximum temperature also declined, from 32.08°C to 31.60°C, a 1.50% decrease. This behavior is expected because a longer intake channel positions its mouth farther away from the hot water waste disposal outlet; as a result, the water entering the intake point is cooler, which explains the observed temperature reduction.
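The percentage "effectiveness" figures quoted above and in the abstract follow directly from the temperature values. A quick check, using the West Existing+957 m numbers quoted in the text:

```python
# Reduction of the intake-point temperature for the West Existing+957 m
# scenario, using values quoted in the text (°C).
avg_existing, avg_extended = 31.40, 31.16
max_existing, max_extended = 32.08, 31.60

avg_drop = avg_existing - avg_extended                          # 0.24 °C
avg_eff = 100.0 * avg_drop / avg_existing                       # ~0.77 %
max_eff = 100.0 * (max_existing - max_extended) / max_existing  # ~1.50 %
print(f"average: -{avg_drop:.2f} °C ({avg_eff:.2f}%), "
      f"maximum: {max_eff:.2f}%")
```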
During the east monsoon, the intake channel extension scenarios do not generally reduce the seawater temperature at the intake point; only in the Existing+957 m scenario do the average and maximum temperatures at the intake point decrease. This is mainly influenced by the thermal wastewater from outfall 2, which normally moves from east to west but encounters obstruction from G Island, affecting the temperature at the intake channel mouth and at the intake. Therefore, in all scenarios other than Existing+957 m, the maximum temperature remained higher, as shown in Table 3. In the east monsoon, the Existing+957 m scenario only yielded a slight reduction in the average water temperature at the intake point, from 31.72°C to 31.51°C (0.65%), while the maximum decreased from 32.16°C to 32.02°C, a 0.45% reduction.

Hao et al. (2020) also conducted a study at the Huadian power plant to explore the impact of heated water retaining designs. The study showed that walls and barriers reduced the maximum temperature at the intake point by 1.0 to 1.3°C, with an average decrease of 0.2°C. The same study showed that constructing a hot water retaining structure could reduce the seawater temperature at a power plant intake by 1.12 to 1.24% (Hao et al., 2020). These results suggest that extending the intake channel in the 250 to 957 m range was less effective than other heat-retaining structural designs: the present study found that the average water temperature at the intake point decreased only by 0.01 to 0.24°C, corresponding to a 0.02 to 0.78% reduction.

The temperature reduction at the intake point is highly dependent on various factors, including the design and length of the heat-retaining structure, the distance from the outfall, and the bathymetry (Shawky et al., 2015). Extending the intake channel is more effective in reducing the temperature at the intake point during the west monsoon than during the east monsoon. In the east monsoon, the flow of hot water, which generally disperses to the west, is blocked by G Island, resulting in higher temperatures at the intake channel mouth than at the intake point. When the intake channel extension is limited to 250 m, 500 m, or 750 m, the effectiveness is reduced, and the temperature at the intake point can even increase.

The water temperature specification at the condenser inlet differs between the Steam Turbine Generators (STG 1, 2, and 3), at 30.4°C, and Steam Power Stations 4 and 5, at 32°C (PJB UP Muara Karang, 2020). The cooling water input for the Steam Power Stations met the specification in all scenarios, whereas pre-treatment is needed for the cooling water input at the Steam Turbine Generators to meet the specified requirements. One method often used is a cost-effective cooling pool, which has a simple design but requires a large land area. By adopting this method, the discharge water temperature from the outfall can be regulated to remain within 4°C of the ambient temperature, at an estimated cost of approximately Rp. 806 billion (Nurdini, 2017). Complying with the seawater quality standard, which demands a maximum temperature difference of 2°C from the ambient seawater temperature, requires a substantial financial commitment.

The estimated cost for constructing a dike of 4 m average height in the sea and the 957 m extension of the intake channel at MKPP was approximately Rp. 60.29 billion, with each meter costing approximately Rp.
63 million (Suranto et al., 2021). In a comparative context, the Bandar Abbas Thermal Power Plant in Iran built a 500 m long outlet channel in 2005 at an approximate cost of USD 634,147.8 (Abbaspour et al., 2006).

In the seawater surrounding MKPP, the macrobenthic diversity index is quite high, roughly 1.19, while the evenness is 0.49 (Wulandari et al., 2021). Among the macrobenthic organisms, the Polychaeta and Mollusca groups are the dominant species. Water temperature greatly influences aquatic life, but because the temperature change here is less than 0.5°C, it is not expected to have a substantial impact on benthic organisms (Deabes, 2020).

The abundance of phytoplankton in the waters around MKPP ranges from 70 to 620 ind/l (Islami, 2018). While the temperature tolerance limit for plankton was recorded at 35°C (Nybakken, 1992), other aquatic organisms, namely fish (38.1°C), Crustacea (37.9°C), and Mollusca (36.7°C), also show varying tolerance limits (Mihardja et al., 1999). Phytoplankton have been reported to be relatively unaffected by temperature fluctuations.

During both the west and east monsoons, there is a vertical temperature difference in the water, with the bottom being slightly cooler than the surface, as shown in Fig 7 (Hasita et al., 2013; Aji et al., 2017). This variation is mainly due to the influence of sunlight, which heats the water's surface. In the west monsoon, the average temperature difference between the bottom and the surface falls in the range of 0.16 to 0.21°C, as shown in Fig 7a, while in the east monsoon this difference is larger, ranging from 0.23 to 0.50°C, as shown in Fig 7b. The extension of the intake channel had a more pronounced effect during the east monsoon, mainly because the surface temperature at the intake point was increased by the heat from the hot water discharge.

Temperature variation along the intake channel
The temperature conditions along the intake line for the five scenarios at the end of the simulation are shown in Fig 8. During the west monsoon, there is a typical pattern in which the water temperature tends to be higher closer to the intake channel mouth, with the exception of the existing conditions. The pattern is mainly influenced by the length of the channels: shorter channels, whose mouths are closer to outfall 1 in the west, showed lower temperatures at the mouth. This is because the wastewater discharge contributed heat to the intake channel mouth. The temperature at the intake point is consistently lower than under the existing conditions, except in the Existing+250 m scenario, where the intake point experienced higher temperatures, indicating heat transfer. This deviation from the norm is due to the limited cooling effect from the surrounding air.

In the east monsoon, there is a prevailing pattern in which the water temperature generally rises as it approaches the intake channel mouth, except under the existing conditions and in the Existing+250 m scenario. This increase in temperature is associated with the intake channel length: the longer the intake channel, the higher the temperature at the mouth. This phenomenon is mainly attributed to the movement of hot wastewater from outfall 2, which flows through the Mutiara Beach canal. The wind and currents carry this heat from east to west, increasing temperatures at the mouths of the Existing+750 m and Existing+957 m intake channels.
The temperature at the intake point remains relatively stable, similar to the existing condition, except in the Existing+750 m and Existing+957 m scenarios. This shows that in the Existing+250 m and Existing+500 m scenarios heat was still transferred to the intake point, as the cooling effect from the surrounding air was not significant.

Snapshots of the modeling results at the moment the temperature at the intake point was at its highest are shown in Figure 9. They show the heat distribution pattern of the MKPP outfall 1 and outfall 2 wastewaters in the west and east seasons; the red and blue colors represent the highest and lowest temperatures, respectively. During the west season, the predominant water flow is eastward, so the major source of heat affecting the intake channel is the wastewater discharge from outfall 1, located to the west of the intake channel. The heat from outfall 2, situated to the east of the intake channel, does not significantly affect the channel during the west monsoon, as shown in Figure 9. In these scenarios, the intake channel length becomes a crucial factor: a longer intake channel places its mouth farther from outfall 1 and therefore receives less of its heat.

In the east season, the dominant flow direction is westward, so the water temperature at the intake channel mouth is significantly influenced by the hot wastewater originating from outfall 2, which flows through the Mutiara Beach canal. The heat generated by the wastewater from outfall 1, located west of the channel, has no discernible impact, as it disperses westward, away from the intake channel. In this situation, the intake channel length is again a critical factor: a longer intake channel shows higher water temperatures at the mouth because it is positioned closer to the end of the Mutiara Beach canal, where the wastewater from outfall 2 flows. Therefore, it is essential to implement control measures or other modifications to prevent heat from the wastewater, specifically during the east season.

Conclusion
Extending the intake channel is more effective in reducing the temperature at the intake point during the west monsoon than during the east monsoon. During the west and east monsoons, the average temperature differences between the bottom and the surface ranged from 0.16 to 0.21°C and from 0.23 to 0.50°C, respectively. As a general trend, the water temperature increased as it approached the intake channel mouth. However, extending the intake channel by 250 to 957 m was not optimal, because it resulted in only a marginal reduction in the average water temperature at the intake, ranging from 0.01 to 0.24°C, with an effectiveness between 0.02 and 0.78%. To effectively address this issue, the addition of another heat-retaining structure, such as a north-south dike positioned on the east side of the intake channel mouth, is essential. This measure is crucial for reducing the impact of wastewater from outfall 2, specifically during the east monsoon.

Fig 1 Location and Area of the Thermal Study at MKPP
Fig 2 Domain of the Thermal Dispersion Modeling at MKPP. Note: the modeling domain assumes that Island G's reclamation has been completed, except for the modeling domain used for validation
The surface elevation validation result is expressed through the NRMSD (Normalized Root Mean Square Deviation) value, which was measured at 1.354%. This means that the results of the model are in line with the field measurements, with an accuracy exceeding 98%. A comparison of the modeled surface elevation and the measurement results is shown in Fig 3.

Fig 3 Comparison of Surface Elevation between Modeling Results and Measurements
Fig 5 Validation of Thermal Dispersion Modeling Results against Measurement Data: (a) Location of Temperature Observations, (b) Temperature Comparison, Model vs. Measurement, in the West Monsoon, (c) Temperature Comparison, Model vs. Measurement, in the East Monsoon
Fig 7 Temperature Comparison at the Surface and Bottom Layers at the Cooling Water Intake Point: (a) West Monsoon, (b) East Monsoon
Fig 8 Surface Temperature along the Cooling Water Intake Channel: (a) West Monsoon, (b) East Monsoon
Fig 9 Snapshot of Modeling Results when the Temperature at the Intake Point is at its Maximum
Table 3 The average and maximum temperature at the water intake point during the west and east monsoons
2023-12-21T16:17:43.711Z
2023-12-14T00:00:00.000
{ "year": 2023, "sha1": "b3d713c0fc2f7e3ee3d34ce152258ddcefa66496", "oa_license": "CCBYSA", "oa_url": "https://ijred.cbiore.id/index.php/ijred/article/download/57680/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b0f48d63b39adfa32f691b1ec7f7fdac0a23e75b", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
1214338
pes2o/s2orc
v3-fos-license
Scaling Dynamics of a Massive Piston in a Cube Filled With Ideal Gas: Exact Results

We continue the study of the time evolution of a system consisting of a piston in a cubical container of large size $L$ filled with an ideal gas. The piston has mass $M \sim L^2$ and undergoes elastic collisions with $N \sim L^3$ gas particles of mass $m$. In a previous paper, Lebowitz, Piasecki and Sinai considered a scaling regime, with time and space scaled by $L$, in which they argued heuristically that the motion of the piston and the one-particle distribution of the gas satisfy autonomous coupled differential equations. Here we state exact results and sketch proofs for this behavior.

The size L is a large parameter of the model, and we are interested in the limit behavior as L → ∞. The mass m of gas particles is fixed. We will assume that M grows proportionally to L² and the number of gas particles N is proportional to L³, while their velocities remain of order one. The position of the piston at time t is specified by a single coordinate X = X_L(t), 0 ≤ X ≤ L; its velocity is then given by V = V_L(t) = Ẋ_L(t). Since the components of the particle velocities perpendicular to the x-axis play no role in the dynamics, we may assume that each particle has only one coordinate, x, and one component of velocity, v, directed along the x-axis. When a particle with velocity v hits the piston with velocity V, their velocities after the collision are given by the law of elastic collision,

V′ = (1 − ε)V + εv,   (1.1)
v′ = −(1 − ε)v + (2 − ε)V,   (1.2)

where ε = 2m/(M + m). We assume that M + m = 2mL²/a, where a > 0 is a constant, so that

ε = 2m/(M + m) = a/L².   (1.3)

When a particle collides with a wall at x = 0 or x = L, its velocity just changes sign. The evolution of the system is completely deterministic, but one needs to specify initial conditions. We shall assume that the piston starts at the midpoint X_L(0) = L/2 with zero velocity V_L(0) = 0. The initial configuration of gas particles is chosen at random as a realization of a (two-dimensional) Poisson process on the (x, v)-plane (restricted to 0 ≤ x ≤ L) with density L² p_L(x, v), where p_L(x, v) is a function satisfying certain conditions, see below, and the factor of L² is the cross-sectional area of the container. In other words, for any domain D ⊂ [0, L] × ℝ¹ the number of gas particles (x, v) ∈ D at time t = 0 has a Poisson distribution with parameter λ_D = L² ∫_D p_L(x, v) dx dv.

Let Ω_L denote the space of all possible configurations of gas particles in Λ_L. For each realization ω ∈ Ω_L the piston trajectory will be denoted by X_L(t, ω) and its velocity by V_L(t, ω). As L → ∞, space and time are rescaled as

y = x/L and τ = t/L,   (1.4)

which is a typical rescaling for the hydrodynamic limit transition (see [LPS, CLS] for motivation and physical discussion). We call y and τ the macroscopic ("slow") variables, as opposed to the original microscopic ("fast") x and t. Now let

Y_L(τ, ω) = X_L(τL, ω)/L,   W_L(τ, ω) = V_L(τL, ω)   (1.5)

be the position and velocity of the piston in the macroscopic variables. The initial density p_L(x, v) satisfies p_L(x, v) = π_0(x/L, v), where the function π_0(y, v) is independent of L. Without loss of generality, assume that π_0 is normalized so that

∫_0^1 ∫_{−∞}^{∞} π_0(y, v) dv dy = 1.

Then the mean number of particles in the entire container Λ_L is equal exactly to N = L³.
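A short numerical sanity check of the collision law (1.1)-(1.2): the sketch below, with arbitrary illustrative masses and velocities, verifies that the update conserves total momentum and kinetic energy, as an elastic collision must.

```python
def collide(v, V, m, M):
    """Elastic collision of a gas particle (mass m, velocity v)
    with the piston (mass M, velocity V); returns (v', V')."""
    eps = 2.0 * m / (M + m)
    v_new = -(1.0 - eps) * v + (2.0 - eps) * V
    V_new = (1.0 - eps) * V + eps * v
    return v_new, V_new

m, M = 1.0, 1.0e4   # illustrative masses; in the paper M ~ L^2
v, V = -2.3, 0.01
v2, V2 = collide(v, V, m, M)
assert abs(m*v + M*V - (m*v2 + M*V2)) < 1e-9               # momentum
assert abs(m*v**2 + M*V**2 - (m*v2**2 + M*V2**2)) < 1e-6   # energy
```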
In order to describe the dynamics by differential equations, we assume that the function π_0(y, v) satisfies several technical requirements stated below.

(P2) Discontinuity lines. π_0(y, v) may be discontinuous on the line y = Y_L(0) (i.e., "on the piston"). In addition, it may have a finite number (≤ K_1) of other discontinuity lines in the (y, v)-plane, each with a strictly positive slope. The requirements (1.6) and (1.7) basically mean that π_0(y, v) takes values of order one.

(P4) Velocity "cutoff". Let π_0(y, v) = 0 whenever |v| ≥ v_max or |v| ≤ v_min, for some constants 0 < v_min < v_max < ∞. This means that the speed of gas particles is bounded from above by v_max and from below by v_min. The requirements (P4) and (P5) are made to ensure that the piston velocity |V_L(t, ω)| will be smaller than the minimum speed of the particles, with probability close to one, for times t = O(L). Such assumptions were first made in [LPS]. We think of D_1, K_1, c_1, c_2, v_1, v_2, v_min, v_max, π_min and π_max in (P1)-(P4) as fixed (global) constants, and of ε_0 in (P5) as an adjustable small parameter. We will assume throughout the paper that ε_0 is small enough. It is important to note that the hydrodynamic limit does not require that ε_0 → 0: the parameter ε_0 stays positive and fixed as L → ∞.

Our main result, Theorem 1.1, establishes the convergence in probability of the random functions Y_L(τ, ω), W_L(τ, ω) characterizing the mechanical evolution of the piston to the deterministic functions Y(τ), W(τ) in the hydrodynamic limit L → ∞.

(H1) Free motion. Inside the container the density satisfies the standard continuity equation for a noninteracting particle system without external forces,

∂π/∂τ + v ∂π/∂y = 0,   (2.1)

for all y except y = 0, y = 1 and y = Y(τ). Equation (2.1) has a simple solution along characteristics,

π(y, v, τ) = π(y − vs, v, τ − s),   (2.2)

provided y − vr ∉ {0, Y(τ − r), 1} for all r ∈ (0, s). Equation (2.2) has one advantage over (2.1): it applies to all points (y, v), including those where the function π is not differentiable.

(H2) Collisions with the walls. At the walls y = 0 and y = 1 we have

π(0, v, τ) = π(0, −v, τ) for v > 0,   (2.3)
π(1, v, τ) = π(1, −v, τ) for v < 0.   (2.4)

(H3) Collisions with the piston. At the piston we have

π(Y(τ), v, τ) = π(Y(τ), 2W(τ) − v, τ) for v > W(τ),   (2.5)

where v represents the velocity after the collision and 2W(τ) − v that before the collision; here

W(τ) = Ẏ(τ)   (2.6)

is the (deterministic) velocity of the piston.

It remains to describe the evolution of W(τ). Suppose the piston's position at time τ is Y and its velocity W. The piston is affected by the particles (y, v) hitting it from the right (such that y = Y + 0 and v < W) and from the left (such that y = Y − 0 and v > W). Accordingly, we define the density of the particles colliding with the piston ("density on the piston") by

q(v, τ; Y, W) = π(Y − 0, v, τ) for v > W and q(v, τ; Y, W) = π(Y + 0, v, τ) for v < W.   (2.7)

(H4) Piston's velocity. The velocity W = W(τ) of the piston must satisfy the equation

∫_{−∞}^{∞} (v − W)² sgn(v − W) q(v, τ; Y, W) dv = 0.   (2.8)

We also remark that for τ > 0, when (2.5) holds, equation (2.8) expresses the fact that the piston's velocity is the average of the nearby particle velocities on each side.

The system of (hydrodynamical) equations (H1)-(H4) is now closed and, given appropriate initial conditions, should completely determine the functions Y(τ), W(τ) and p(y, v, τ) for τ > 0. To specify the initial conditions, we set p(y, v, 0) = π(y, v) and Y(0) = 0.5. The initial velocity W(0) does not have to be specified; it comes "for free" as the solution of the equation (2.8) at time τ = 0. It is easy to check that the initial speed |W(0)| will be smaller than v_min; in fact W(0) → 0 as ε_0 → 0 in (P5). Note that if the initial conditions at τ = 0 do not satisfy (2.3)-(2.5), there will be a discontinuity in p as τ → 0 (see also Remark 4 below).

Equation (2.8) has a unique solution W as long as the piston interacts with some gas particles on both sides, i.e. as long as q(v, τ; Y, W) does not vanish identically for v > W and for v < W. Indeed, the left hand side of (2.8) is a continuous and strictly monotonically decreasing function of W, and it takes both positive and negative values.

The solution W(τ) may not be continuous in τ, though. But if π(y, v, τ) is piecewise C¹ and has a finite number of discontinuity lines with positive slopes (as we require of π_0(y, v) in Section 1), then W(τ) will be continuous and piecewise differentiable.
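The characteristic flow behind (2.2), together with the reflection rules (H2)-(H3), can be traced numerically. The following minimal sketch is a fixed-step tracer with an arbitrary illustrative piston trajectory; it is not the construction used in the proofs, and the step-size handling is deliberately crude.

```python
import math

def trace(y, v, tau, Y, W, dt=1e-4):
    """Trace a characteristic (y, v) forward by time tau: free motion,
    velocity flip at the walls y=0 and y=1, and v -> 2*W(t) - v at the
    piston.  Y(t) and W(t) are the given piston position and velocity.
    On a collision step the position is simply held for one step."""
    t = 0.0
    while t < tau:
        y_next = y + v * dt
        if y_next <= 0.0 or y_next >= 1.0:                   # wall
            v = -v
        elif (y - Y(t)) * (y_next - Y(t + dt)) <= 0.0:       # piston
            v = 2.0 * W(t) - v
        else:
            y = y_next
        t += dt
    return y, v

# example: a slowly oscillating piston around the midpoint
Y = lambda t: 0.5 + 0.01 * math.sin(t)
W = lambda t: 0.01 * math.cos(t)
print(trace(0.2, 1.0, 3.0, Y, W))
```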
Remark 3. One can easily check that the total mass M = ∫∫ π(y, v, τ) dv dy and the total kinetic energy 2E = ∫∫ v² π(y, v, τ) dv dy remain constant along any solution of our system of equations (H1)-(H4). Also, the mass in the left and right parts of Λ_L separately remains constant. Equation (2.8) also preserves the total momentum of the gas, ∫∫ v π(y, v, τ) dv dy, but it changes due to collisions with the walls.

Remark 4. Previously, Lebowitz, Piasecki and Sinai [LPS] studied the piston dynamics under essentially the same initial conditions as our (P1)-(P5). They argued heuristically that the piston dynamics could be approximated by certain deterministic equations in the original (microscopic) variables x and t. The deterministic equations found in [LPS] correspond to our (2.2)-(2.6), with the obvious transformation back to the variables x, t, but our main equation (2.8) has a different counterpart in the context of [LPS], which reads

dV/dt = a ∫_{−∞}^{∞} (v − V)² sgn(v − V) π(X, v, t) dv.   (2.9)

Here X = X(t) and V = V(t) = Ẋ(t) denote the deterministic position and velocity of the piston and π(x, v, t) the density of the gas (the constant a appeared in (1.3)). We refer to [LPS] for more details and a heuristic derivation of (2.9). Since (2.9), unlike our (2.8), is a differential equation, the initial velocity V(0) has to be specified separately, and it is customary to set V(0) = 0. Alternatively, one can set V(0) = W(0), see [CLS]. Equation (2.9) can be reduced to (2.8) in the limit L → ∞ as follows. One can show (we omit details) that (2.9) is a dissipative equation whose solution with any (small enough) initial condition V(0) converges to the solution of (2.8) during a t-time interval of length ∼ ln L. That interval has length ∼ L^{−1} ln L on the τ axis, and so it vanishes as L → ∞; this is why we replace (2.9) with (2.8) and ignore the initial condition V(0) when working with the thermodynamic variables τ and y. The equation (2.9) is not used in this paper.

We now describe the solution of our equations (H1)-(H4) in more detail. Assume that for some τ > 0 the gas density π(y, v, τ) satisfies the same requirements (P1)-(P4) as those imposed on the initial function π_0(y, v) in Sect. 1, with constants D′_1, K′_1, c′_1, c′_2, v′_1, v′_2, v′_min, v′_max, π′_min, π′_max, whose values are not essential, but are independent of τ. We also assume an analogue of (P5), but this one is not so straightforward, since the piston does not have to stay at the middle point y = 0.5 at times τ > 0. We require that, for any point (y, v), the density be nearly symmetric under a reflection (y, v) → (y*, v*) "across the piston", up to an error of some sufficiently small ε′_0 > 0 (conditions (2.11) and (2.12)). The map (y, v) → (y*, v*), which we denote by R_τ, is one-to-one and will be explicitly constructed below. The constant ε′_0 here, just like ε_0 in (P5), is assumed to be small enough; moreover, ε′_0 ≤ C′_0 ε_0 with some constant C′_0 > 0.

We now derive elementary but important consequences of the above assumptions. Since the density π(y, v, τ) vanishes for |v| < v′_min, so does the function q(v, τ; Y, W) defined by (2.7). Moreover, for all |W| < v′_min, the function q(v, τ; Y, W) is independent of W, and so we can write it as q(v, τ; Y). Also, equation (2.8) can be simplified: the factor sgn(v − W) can be replaced by sgn v.
Then, expanding the square in (2.8) reduces it to a quadratic equation for W:

Q_0 W² − 2 Q_1 W + Q_2 = 0,   (2.14)

where

Q_i(τ) = ∫_{−∞}^{∞} v^i sgn(v) q(v, τ; Y) dv,  i = 0, 1, 2.   (2.15)-(2.17)

The integrals Q_0, Q_1, Q_2 have a direct physical meaning in terms of m_L, p_L, e_L, the total mass, momentum and energy of the incoming gas particles (per unit length) on the left hand side of the piston, and m_R, p_R, e_R, those on the right hand side of it. The value Q_2 also represents the net pressure exerted on the piston by the gas as if the piston did not move. Of course, if Q_2(τ) = 0, then we must have W(τ) = 0, which agrees with (2.14).

Next, under the above requirements on π(y, v, τ), the function q(v, τ; Y) is, in a certain sense, nearly symmetric in v about v = 0 (see [CLS] for details). This fact implies that Q_0 and Q_2 are small; more precisely,

|Q_0(τ)| ≤ C′ ε_0 and |Q_2(τ)| ≤ C′ ε_0,   (2.18)

where C′ > 0 is a constant depending on the parameters D′_1, K′_1, etc., but not on ε_0. At the same time, the assumption (P3) guarantees that

Q_1(τ) ≥ Q_{1,min} > 0,   (2.19)

where Q_{1,min} is a constant depending on π′_min, v′_1, v′_2, etc., but not on ε_0. If ε_0 is small enough, there is a unique root of the quadratic polynomial (2.14) on the interval (−v′_min, v′_min), which corresponds to the only solution of (2.8). Since this root is smaller, in absolute value, than the other root of (2.14), it can be expressed by

W(τ) = (Q_1 − √(Q_1² − Q_0 Q_2))/Q_0,   (2.20)

where the sign before the radical is "−", not "+". Of course, (2.20) applies whenever Q_0 ≠ 0, while for Q_0 = 0 we simply have W(τ) = Q_2/(2Q_1). Eqs. (2.18)-(2.20) imply an upper bound on the piston velocity: |W(τ)| ≤ B′ ε_0 for some constant B′ > 0 depending on D′_1, K′_1, etc., but not on ε_0. A similar bound holds for the piston acceleration A(τ) = Ẇ(τ), since A(τ) can be expressed through the derivatives dQ_i/dτ by differentiating (2.20), and |dQ_i/dτ| = |(dQ_i/dY) W| ≤ const · ε_0; see [CLS] for more details.
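The root selection in (2.20) is easy to mirror numerically. In the sketch below the density q is an arbitrary illustrative choice supported away from v = 0 (it is not a density from the paper), and the integrals Q_0, Q_1, Q_2 are approximated by simple Riemann sums:

```python
import numpy as np

def piston_velocity(q, v_grid):
    """Solve Q0*W^2 - 2*Q1*W + Q2 = 0 for the piston velocity, taking
    the root in (2.20) (the '-' sign before the radical), and falling
    back to W = Q2/(2*Q1) when Q0 vanishes."""
    sgn = np.sign(v_grid)
    dv = v_grid[1] - v_grid[0]
    Q0 = np.sum(sgn * q) * dv
    Q1 = np.sum(np.abs(v_grid) * q) * dv
    Q2 = np.sum(v_grid**2 * sgn * q) * dv
    if abs(Q0) < 1e-14:
        return Q2 / (2.0 * Q1)
    return (Q1 - np.sqrt(Q1**2 - Q0 * Q2)) / Q0

# illustrative, slightly asymmetric density supported on 0.5 < |v| < 2
v = np.linspace(-3.0, 3.0, 6001)
q = ((np.abs(v) > 0.5) & (np.abs(v) < 2.0)) * (1.0 + 0.05 * np.sign(v))
print(piston_velocity(q, v))   # a small positive W, as expected
```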
Next we consider the evolution of a point (y, v) in the domain G := {(y, v) : 0 ≤ y ≤ 1} under the rules (H1)-(H3), i.e. as it moves freely with constant velocity and collides elastically with the walls and the piston. Denote by (y_τ, v_τ) its position and velocity at time τ ≥ 0. Then (H1)-(H3) translate into

ẏ_τ = v_τ and v̇_τ = 0 whenever y_τ ∉ {0, Y(τ), 1};  v_{τ+0} = −v_{τ−0} whenever y_τ ∈ {0, 1};  v_{τ+0} = 2W(τ) − v_{τ−0} whenever y_{τ−0} = Y(τ).   (2.21)

Note that (2.21) corresponds to a special case of the mechanical collision rules (1.1)-(1.2) with ε = 0 (equivalently, m = 0). Hence the point (y, v) moves in G as if it were a gas particle with zero mass. The motion of points (y, v) is described by a one-parameter family of transformations F^τ : G → G defined by F^τ(y_0, v_0) = (y_τ, v_τ) for τ > 0. We will also write F^{−τ}(y_τ, v_τ) = (y_0, v_0). According to (H1)-(H3), the density satisfies the simple equation π(y, v, τ) = π_0(F^{−τ}(y, v)) for all τ ≥ 0. Also, it is easy to see that for each τ > 0 the map F^τ is one-to-one and preserves area, i.e. det |DF^τ(y, v)| = 1.

Now, because of (P4), the initial density π_0(y, v) can only be positive in the region G_+ := {(y, v) ∈ G : v_min < |v| < v_max}; hence we will restrict ourselves to points (y, v) ∈ G_+ only. At any time τ > 0, the images of those points will be confined to the region G_+(τ) := F^τ(G_+). In particular, π(y, v, τ) = 0 for (y, v) ∉ G_+(τ). The map R_τ : (y, v) → (y*, v*) involved in (2.11) and (2.12) can now be defined as R_τ = F^τ ∘ R_0 ∘ F^{−τ}, where R_0(y, v) = (2Y(0) − y, −v) is a simple reflection "across the piston" at time τ = 0.

We now make an important observation. If a fast point (y_τ, v_τ) collides with a slow piston, |W(τ)| ≪ |v_τ|, they cannot recollide too soon: the point must travel to a wall, bounce off it, and then travel back to the piston before it hits it again. Therefore, as long as (P1)-(P4) hold, the collisions of each moving point (y_τ, v_τ) ∈ G_+(τ) with the piston occur at well separated time moments, which allows us to effectively count them. For (y, v) ∈ G_+,

N(y, v, τ) = #{s ∈ (0, τ) : y_s = Y(s)}

is the number of collisions of the point (y, v) with the piston during the interval (0, τ). For each τ > 0, we partition the region G_+(τ) into subregions G_+^n(τ), n ≥ 0, where G_+^n(τ) is occupied by the points that at time τ have experienced exactly n collisions with the piston during the interval (0, τ). Now, for each n ≥ 1 we define τ_n > 0 to be the first time when a point (y_τ, v_τ) ∈ G_+(τ) experiences its (n+1)-st collision with the piston, i.e.

τ_n = sup{τ > 0 : G_+^{n+1}(τ) = ∅}.

In particular, τ_1 > 0 is the earliest time when a point (y_τ, v_τ) ∈ G_+(τ) experiences its first recollision with the piston. Hence, no recollisions occur on the interval [0, τ_1), and we call it the zero-recollision interval. Similarly, on the interval (τ_1, τ_2) no more than one recollision with the piston is possible for any point, and we call it the one-recollision interval. The time moment τ* mentioned in Theorem 1.1 is the earliest time when a point (y_τ, v_τ) ∈ G_+(τ) either experiences its third collision with the piston or has its second collision with the piston given that the first one occurred after τ_1. Hence τ* ≤ τ_2, and actually τ* is very close to τ_2, see below. Theorem 2.1 summarizes the properties of the solutions of the hydrodynamical equations (H1)-(H4); its full statement is given in [CLS].

Lastly, we demonstrate the reason for our assumption that all the discontinuity curves of the initial density π_0(y, v) must have positive slopes. It would be quite tempting to let π_0(y, v) have more general discontinuity lines, e.g. allow it to be smooth for v_min < |v| < v_max and abruptly drop to 0 at v = v_min and v = v_max. The following example shows why this is not acceptable.

Example. Suppose the initial density π_0(y, v) has a horizontal discontinuity line v = v_0 (say, v_0 = v_min or v_0 = v_max). After one interaction with the piston the image of this discontinuity line can oscillate up and down, due to the fluctuations of the piston acceleration (Fig. 1). As time goes on, this oscillating curve will "travel" to the wall and come back to the piston, experiencing some distortions on its way, caused by the differences in velocities of its points (Fig. 1). When this curve comes back to the piston again, it may well have "turning points" where its tangent line is vertical, or even contain vertical segments of positive length. This produces unwanted singularities or even discontinuities of the piston velocity and acceleration. The same phenomena can also occur when a discontinuity line of the initial density π_0(y, v) has a negative slope.

Sketch of the argument
Our proof of Theorem 1.1 is based on large deviation estimates for the Poisson random variable.

Lemma 3.1 ([CLS]) Let X be a Poisson random variable with parameter λ > 0. Then the deviations of X from its mean λ satisfy a Gaussian-type tail bound (the explicit estimate is given in [CLS]).

This shows that the probabilities of large deviations decay rapidly, as they do for the Gaussian distribution. The principal step in our proof of Theorem 1.1 is the velocity decomposition scheme described next. Let V_L(t, ω) be the velocity of the piston at time t ≥ 0 for a random configuration of particles ω ∈ Ω_L. Let Δt > 0 be a small time increment.
Then the law of elastic collision (1.1) implies

V_L(t + Δt, ω) = (1 − ε)^k V_L(t, ω) + ε Σ_{j=1}^{k} (1 − ε)^{k−j} v_j.   (3.1)

Here k = k(t, Δt, ω) is the number of particles colliding with the piston during the time interval (t, t + Δt), and v_j are their velocities, numbered in the order in which the particles collide. Expanding in ε, we rearrange the formula (3.1) as

V_L(t + Δt, ω) = V_L(t, ω) + ε ( Σ_{j=1}^{k} v_j − k V_L(t, ω) ) + χ_1,   (3.2)

where χ_1 is a higher-order error term. Let us assume that the fluctuations of the velocity V_L(s, ω) on the interval (t, t + Δt) are bounded by some quantity δV:

sup_{t ≤ s ≤ t+Δt} |V_L(s, ω) − V_L(t, ω)| ≤ δV.   (3.3)

Consider two regions D_1 ⊂ D_2 on the (x, v) plane, each a union of two trapezoids, D_i = D_i^− ∪ D_i^+, where D_i^− denotes the upper and D_i^+ the lower trapezoid, see Fig. 2. The bound (3.3) implies that all the particles in the region D_1 necessarily collide with the piston during the time interval (t, t + Δt); moreover, the trajectory of every point (x, v) ∈ D_1 hits the piston within time Δt. The bound (3.3) also implies that all the particles actually colliding with the piston during the interval (t, t + Δt) are contained in D_2.

Let us denote by k_r^± the number of particles in the regions D_r^± for r = 1, 2 at time t. We also denote by k^− the number of particles actually colliding with the piston "on the left", and by k^+ that number "on the right" (of course, k^− + k^+ = k). Due to the above observations, k_1^± ≤ k^± ≤ k_2^±.

Now, suppose that t + Δt < τ_1 L. Then we show that for typical configurations ω the particles in each domain D_r, r = 1, 2, have never collided with the piston before. Therefore, their numbers k_r^±, r = 1, 2, obey the laws of the Poisson distribution; in particular, the large deviation estimate in Lemma 3.1 applies. This gives the bound (for typical ω)

|k_r^± − λ_r^±| ≤ Δk_r^±,   (3.6)

where λ_r^± = L² ∫_{F_L^{−t}(D_r^±)} p_L(x, v) dx dv is the Poisson parameter of the corresponding region and F_L^{−t} corresponds to the action of F^{−τ} = F^{−t/L} in the original time-space coordinates x, t. The deviations Δk_r^± in (3.6) can be adjusted by using Lemma 3.1. The difference λ_2^± − λ_1^± is estimated directly in terms of δV and Δt. By putting all these estimates together we get tight bounds on k in (3.2); similarly we get bounds on Σ_{j=1}^{k} v_j in (3.2). The following is the final result of this analysis:

V_L(t + Δt, ω) = V_L(t, ω) + D(t, ω) Δt + χ_3.   (3.7)

Here D(t, ω) is a quadratic expression in V_L(t, ω) whose coefficients Q_0, Q_1, Q_2 are defined similarly to (2.15)-(2.17), in which Y(τ) must be replaced by the actual piston position X_L(t, ω)/L. The error term χ_3 in (3.7) is bounded by a quantity of order √Δt (3.9), which corresponds to Brownian motion-type random fluctuations.

The term D(t, ω) in (3.7) represents the main ("deterministic") force acting on the piston, and χ_3 describes random fluctuations of that force. When the piston velocity stabilizes, the main force D should vanish, and an "equilibrium" velocity V̄_L(t, ω) will be established. The latter is the root of the equation D(t, ω) = 0, which is

V̄_L(t, ω) = (Q_1 − √(Q_1² − Q_0 Q_2))/Q_0.   (3.10)

V̄_L(t, ω) is a very slowly changing function of t, whose derivative is small: |dV̄_L(t, ω)/dt| ≤ const · L^{−1} ε_0. As a result, V_L will always stay close to V̄_L; more precisely,

|V_L(t, ω) − V̄_L(t, ω)| ≤ const · L^{−1} ln L   (3.11)

on the entire zero-recollision interval 0 < t < τ_1 L. Now, the piston coordinate Y_L(τ, ω) = X_L(τL, ω)/L is the solution of the differential equation Ẏ_L = V_L = V̄_L + χ_4, where |χ_4| < const · L^{−1} ln L by (3.11). On the other hand, the deterministic piston coordinate Y(τ) is the solution of the equation Ẏ = W, and both V̄ and W are given by the same radical expression, cf. (2.20) and (3.10). Lastly, a simple application of Gronwall's inequality completes the proof of Theorem 1.1 on the zero-recollision interval (0, τ_1). The proof on the one-recollision interval (τ_1, τ*) goes along the same lines.
One major difference is that the number of particles k_r^±, r = 1, 2, in the domains D_r^± constructed in the velocity decomposition scheme is no longer a Poisson variable, so Lemma 3.1 does not apply directly. To handle this new situation, we pull the domain D_r^± back in time, as we did before. But now that pullback involves one interaction with the piston (corresponding to the first collision of the particles in D_r^± with the piston, which occurs during the zero-recollision interval 0 < t < τ_1 L). Since the piston position and velocity at the moment of that first collision are random, the preimage of D_r^± will be a random domain. Its shape will depend on the piston velocity V(t, ω) during the zero-recollision interval 0 < t < τ_1 L. We observe that the boundary of the preimage of D_r^± is described by a random, yet Hölder continuous, function, and its Hölder exponent is 0.5 due to (3.9). Then we pick a small d > 0 and construct a d-dense set in the space of all Hölder continuous functions, in the spirit of a work by Kolmogorov and Tihomirov [KT]. The elements of that d-dense set can be used to construct a finite collection of (nonrandom) domains, so that one of them will approximate the (random) preimage of our D_r^± (we need to select the small d > 0 carefully to ensure sufficient accuracy of the approximation). Now the number of particles in our random domain (the preimage of D_r^±) can be approximated by the number of particles in the corresponding nonrandom domain. The latter has a Poisson distribution, and finally we can apply Lemma 3.1. This trick gives the necessary estimates on k_r^±.

A full proof of Theorem 1.1 is given in [CLS]. At present, we do not know if this theorem can be extended beyond the critical time τ*; this is an open question. Some other open problems are discussed in the next section.

4 Discussion and open problems
1. The main goal of this work is to prove that under suitable initial conditions random fluctuations in the motion of a massive piston are small and vanish in the thermodynamic limit. We are, however, able to control those fluctuations effectively only as long as the surrounding gas particles can be described by a Poisson process, i.e. during the zero-recollision interval 0 < τ < τ_1. In that case the random fluctuations are bounded by const · L^{−1} ln L, see Remark 2 after Theorem 1.1. Up to the logarithmic factor, this bound is optimal, see [CL] and earlier estimates by Holley [H] and Dürr et al. [DGL].

During the one-recollision interval τ_1 < τ < τ_2, the situation is different. The probability distribution of gas particles that have experienced one collision with the piston is no longer a Poisson process; it has intricate correlations. We are only able to show that random fluctuations remain bounded by L^{−1/7}, see again Remark 2. Perhaps our bound is far from optimal, but our numerical experiments reported in [CL] show that random fluctuations indeed grow during the one-recollision interval. We have tested numerically whether random fluctuations remain small after more than one recollision, i.e. at times τ > τ_2. We found that for some initial π_0 they actually increased very rapidly, and we conjectured that the rate of increase was exponential in τ. We found, indeed, that at times τ ∼ log L the fluctuations became large even on a macroscopic scale, and then many unexpected phenomena occurred [CL]. Interestingly, the exponential growth of random fluctuations seems to be related to the instability of our hydrodynamical equations.
We found that small perturbations of the initial density π_0 can grow exponentially in τ under certain conditions, matching the growth of random fluctuations of the piston motion in the mechanical model. We refer the reader to [CL] for further discussion and to our work in progress [CCLP].

2. It is clear that in our model recollisions of gas particles with the piston have a very "destructive" effect on the dynamics of the system. However, we need to distinguish between two types of recollisions. We say that a recollision of a gas particle with the piston is long if the particle hits a wall x = 0 or x = L between the two consecutive collisions with the piston. Otherwise a recollision is said to be short. Long recollisions require some time, as the particle has to travel all the way to a wall, bounce off it, and then travel back to the piston before it hits it again. Short recollisions can occur in rapid succession. We have imposed the velocity cut-off (P4) in order to avoid any recollisions for at least some initial period of time (which we call the zero-recollision interval). More precisely, the upper bound v_max guarantees the absence of long recollisions; without it, we would have to deal with arbitrarily fast particles that dash between the piston and the wall very many times in any interval (0, τ). On the other hand, the lower bound v_min was assumed to exclude short recollisions. There are good reasons to believe, though, that short recollisions may not be so destructive for the piston dynamics. Indeed, let a particle experience two or more collisions with the piston in rapid succession (i.e. without hitting a wall in between). This can occur in two cases: (i) the particle's velocity is very close to that of the piston, or (ii) the piston's velocity changes very rapidly. The latter should be very unlikely, since the deterministic acceleration of the piston is very small, cf. Theorem 2.1c. In case (i), the recollisions should have very little effect on the velocity of the piston according to the rule (1.1), so that they may be safely ignored, as was done already in earlier studies [H, DGL]. We therefore expect that our results can be extended to velocity distributions without a cut-off from zero, i.e. allowing v_min = 0.

3. In our paper, L plays a dual role: it parameterizes the mass of the piston (M ∼ L²), and it represents the length of the container (0 ≤ x ≤ L). This duality comes from our assumption that the container is a cube. However, our model is essentially one-dimensional, and the mass of the piston M and the length of the interval 0 ≤ x ≤ L can be treated as two independent parameters. In particular, we can assume that the container is infinitely long in the x direction (so that L is infinite), but the mass of the piston is still finite and given by M ∼ L². In this case there are no recollisions with the piston, as long as its velocity remains small; hence, our zero-recollision interval is effectively infinite. As a result, Theorem 1.1 can be extended to arbitrarily large times. Precisely, for any T > 0 we can prove the convergence in probability

sup_{0 ≤ τ ≤ T} |Y_L(τ, ω) − Y(τ)| → 0 and sup_{0 ≤ τ ≤ T} |W_L(τ, ω) − W(τ)| → 0 as L → ∞.

4. Along the same lines as above, we can assume that the container is d-dimensional with d ≥ 3. Then the mass of the piston and the density of the particles are proportional to L^{d−1} rather than L². When d is large, the gas particles are very dense on the (x, v) plane. This leads to a much better control over fluctuations of the particle distribution and the piston trajectory.
As a result, Theorem 1.1 can be extended to the k-recollision interval (τ_k, τ_{k+1}), where k ≥ 1 depends on d. It can be shown that for any k ≥ 1 there is a d_k ≥ 3 such that for all d ≥ d_k the convergence (1.10) and (1.11) holds with τ* = τ_k. Therefore, a higher-dimensional piston is more stable than a lower-dimensional one.

It would be interesting to investigate other modifications of our model that lead to more stable regimes. For example, let the initial density π_0(y, v) of the gas depend on the factor a = εL² in such a way that π_0(y, v) = a^{−1} ρ(y, v), where ρ(y, v) is a fixed function. Then the particle density grows as a → 0. This is another way to increase the density of the particles, but without changing the dimension. One may expect a better control over random fluctuations in this case, too.
2014-10-01T00:00:00.000Z
2002-12-30T00:00:00.000
{ "year": 2002, "sha1": "e1a7ccf011fb2a28b0e0264495c016ed93b975c9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0212637", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4cdc2d333b515b74eb3a3d0e6f521a223e6eeffb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
221083244
pes2o/s2orc
v3-fos-license
Zero-Shot Heterogeneous Transfer Learning from Recommender Systems to Cold-Start Search Retrieval

Many recent advances in neural information retrieval models, which predict top-K items given a query, learn directly from a large training set of (query, item) pairs. However, they are often insufficient when there are many previously unseen (query, item) combinations, a situation often referred to as the cold start problem. Furthermore, the search system can be biased towards items that were frequently shown for a query previously, also known as the 'rich get richer' (a.k.a. feedback loop) problem. In light of these problems, we observed that most online content platforms have both a search and a recommender system that, while having heterogeneous input spaces, can be connected through their common output item space and a shared semantic representation. In this paper, we propose a new Zero-Shot Heterogeneous Transfer Learning framework that transfers learned knowledge from the recommender system component to improve the search component of a content platform. First, it learns representations of items and their natural-language features by predicting (item, item) correlation graphs derived from the recommender system as an auxiliary task. Then, the learned representations are transferred to solve the target search retrieval task, performing query-to-item prediction without having seen any (query, item) pairs in training. We conduct online and offline experiments on one of the world's largest search and recommender systems from Google, and present the results and lessons learned. We demonstrate that the proposed approach can achieve high performance on offline search retrieval tasks, and, more importantly, achieved significant improvements in relevance and user interactions over the highly-optimized production system in online experiments.

* Equal contribution. † Corresponding author: Tao Wu: iotao@google.com.

INTRODUCTION
Most online content platforms, such as music streaming services or e-commerce websites, have systems that return top-K items either given a natural-language query (i.e., a search retrieval system) or given the user context, which can be the user attributes and the user's interactions with the platform (i.e., a recommender system). These two systems share the same output item space but have different input feature spaces. In this paper, we study how to improve search retrieval by transferring learnings from recommender systems.

Recently, neural information retrieval (neural IR) models have been widely applied in search products across many industries [10, 22, 32]. Such methods can retrieve and score items that do not share keywords with the query. However, they usually require a large amount of (query, item) pairs as training data, usually collected from users' search logs, and these types of training data may not be available for many (query, item) combinations. This is often referred to as the cold start problem [13]. Consider, for example, an online music streaming system, where users mostly listen to music through homepage recommendations or playlists generated by other users. In this case, it is likely that many songs in the system have been listened to by users, but not found through search requests. This motivated us to ask: Can we utilize the abundant data from the recommender system to cold-start the search retrieval system of the same content platform?
In this paper, we propose a new Zero-Shot Heterogeneous Transfer Learning framework (ZSL), which does not require any input from (query, item) search data, but learns the query and item representations entirely via an auxiliary task derived from the recommender system. There are two advantages: first, this method can cold-start the search system, as it does not require search data to train. Second, supervised methods trained on (query, item) pairs can suffer from the potential bias and feedback loop [25] introduced by the search system. For large-scale search and recommender systems, there can easily be more than thousands of relevant items matching a query, but users are only able to explore a very limited number of them. Therefore, the collected training data are heavily affected by the current search algorithm, exacerbating the "rich get richer" effect [3]. Thus, this framework, although motivated by the search retrieval cold-start problem, can also be useful even when adequate (query, item) search data is available. We assume that (item, item) correlations can be extracted from the recommender system, together with text features for each item. Such (item, item) correlations commonly exist [12], for example in the citation network of a bibliography system or the music co-listening pairs of an online music platform, where the text features of items are titles, keywords, or descriptions. The auxiliary task of the proposed framework is to predict neighbor items given a seed item; the learned semantic representations are then transferred to the target task, which is to predict items given a query. We explore two implementations under this framework. We call the first method Multi-task Item Encoder, where the item and text-feature representations are jointly learned by optimizing the two tasks of predicting the item given its text features, and predicting the item given its neighbors. We call the second method Item-to-Text Transformed Encoder; it optimizes only a single task of predicting the item given its neighbors, but utilizes the text features to encode the items.

Figure 1: A comparison of the high-level frameworks of classic zero-shot learning in image classification [26] (left) and the zero-shot heterogeneous transfer learning in this paper (right).

We conduct experiments on one of the world's largest recommender systems from Google. Our proposed methods demonstrate promising results on multiple offline retrieval tasks. In an A/B live experiment, where the baseline is the production search system that is already highly fine-tuned, with many components of advanced retrieval techniques (e.g., both term-frequency-based scoring and supervised machine learning algorithms), our proposed method improved multiple evaluation metrics. This shows that even for a system with enough search training data available, ensembling our proposed method can successfully introduce new relevant results that are favored by users. Our contributions in this paper are summarized below: (1) To the best of our knowledge, this is the first work that studies the problem of cold-starting a production-scale search retrieval system from (item, item) correlations in the recommender system of the same online content platform. (2) We propose the Zero-Shot Heterogeneous Transfer Learning framework as a solution to cold-start the search retrieval system.
(3) We conduct extensive offline and online experiments on one of the world's largest recommender systems, and find that our proposed method 1) when applied alone, can produce accurate retrieval results; and 2) when ensembled with supervised methods, can improve the highly optimized search retrieval system. In addition, our findings regarding the effectiveness of our method on broad queries (inferred from query lengths) are valuable insights for practitioners applying such techniques to real-world search systems.

RELATED WORK

Zero-shot learning and transfer learning. For large-scale label domains, it is common to have labels of instances that have never been seen in the training data. The key idea of zero-shot learning [29] is to utilize some auxiliary information about the unseen labels, and learn a model that connects the auxiliary information to the input space. By mapping the input features to the auxiliary information, zero-shot algorithms are then able to find the corresponding unseen labels. Applications of this idea include object detection of unseen classes [26], semantic image retrieval [16], and, more recently, recommender systems for new users [14]. Transfer learning seeks to improve a learner in one domain by transferring information from a related domain. Our proposed framework in this paper is a case of heterogeneous transfer learning [20], as the input spaces of the auxiliary and target tasks are different (see Figure 1). Cold-start problem. This mostly refers to modeling new users (with no previous interactions with the system) or new items (with no records of being consumed by users) in recommender systems. Most methods assume that some side feature information about the user or item is available, so that the representation of a new user or new item can be inferred. Such methods include matrix factorization [8, 33], pairwise regression [21], decision trees [27], and recently a zero-shot learning approach that uses a linear encoder-decoder structure [14]. Semantic search. It seeks to retrieve relevant items beyond keyword matching. The early effort of Latent Semantic Analysis (LSA) [6] uses a vector space to represent queries and documents. More recently, neural IR models [5, 18, 19] apply deep learning techniques to build (query, item) scoring models. Supervised models [5, 19] are trained with search logs of (query, item) pairs. In contrast, unsupervised models [9] mostly learn the word and item representations purely from the item's text features. Recent work [31] shares the same motivation that search retrieval can learn from recommender system data. We point out several key differences between our paper and theirs. The first is the different data assumptions: their model is built on (user, item) data with user embedding optimization, whereas our work does not require any explicit user data and can therefore fit a wider range of applications. The second is the framework difference, as the recommender system data is not necessarily used as the prediction target in our framework. Finally, our study focuses on a real-world search and recommender system with a live experiment, which is not covered by their work.

PROBLEM STATEMENT

Denote a set of items Y = {y_1, y_2, · · · , y_n} as the corpus of a search and recommender system. Each item y_i has text features (x^(i)_1, x^(i)_2, · · · , x^(i)_{k_i}), drawn from a vocabulary of m words: X = {x_1, x_2, · · · , x_m}. The size k_i of the text features can vary across items.
The text features can be either ordered (i.e., a sequence) or unordered (i.e., a set). Finally, we use a binary matrix M ∈ R^{n×n} to represent the (item, item) correlations, where M_{i,j} = 1 denotes that item y_j is a neighbor of y_i: j ∈ Ne(i). In this paper, we do not require M to be symmetric. For convenience, we also call the neighbor items context items. When there is no confusion, we use the terms embedding, vector, and representation interchangeably. Examples of the above problem setting include a paper bibliography system, where items are individual papers, with their text features coming from titles, abstracts, and keywords; the (item, item) correlations can be derived from the citation network, where two papers are correlated if one cites the other. Similarly, in an online music streaming system, the correlation matrix M can come from music co-listening pairs, and the text features are the title, description, or tags of the music. The task is to retrieve relevant items given a query represented by a sequence of words x_{e_1}, x_{e_2}, · · · , x_{e_p}. This would be a traditional supervised learning task if training data of (query, item) pairs were provided. However, this data may not be available for newly built systems with limited user search activity, or it can be very sparse compared to the total number of queries and items. On the other hand, (item, item) correlation data commonly exists in most systems [12]. Therefore, we propose to encode the query and the items into the same latent vector space by utilizing such (item, item) correlations; the search retrieval problem then becomes nearest neighbor search [15]. Formally, a model outputs item vectors v_i in the shared latent space. The query embedding is q = encoder(w_{e_1}, w_{e_2}, · · · , w_{e_p}), where w_ℓ denotes the vector of word x_ℓ and the encoder can be (but is not limited to) Bag-of-Words (BOW) [17], which computes the mean of the word vectors, a recurrent neural network (RNN) [4], or self-attention [28], which models the sequential relations of the words. Then the top-K candidate items are the ones with the largest scores score(q, v), where the score function can be either the vector dot product or cosine similarity.

ZERO-SHOT HETEROGENEOUS TRANSFER LEARNING FRAMEWORK

Before we introduce our proposed Zero-Shot Heterogeneous Transfer Learning methods, we note that it is also possible to learn the semantic vector space of words and items by using only the item text features. The idea is to use the item's text features as a proxy for the query. We can treat this learning task as a multi-class classification task, for instance by using a softmax to represent the probability of item y_i given the text features x_{e_1}, · · · , x_{e_k}:

Pr(y_i | x_{e_1}, · · · , x_{e_k}) = exp(q^T v_i) / Σ_ℓ exp(q^T v_ℓ), where q = encoder(w_{e_1}, · · · , w_{e_k}).

In this case, the total amount of training data equals the total number of items. Consider the special case of a BOW encoder. This method generalizes beyond keyword matching by ensuring that words with similar co-occurrence patterns will be close in the semantic space. For instance, the words "obama" and "president" are likely to coexist in the same document. As a result, the optimization algorithm will not differentiate these two words much, so they end up with similar word vectors. Similarly, items with similar text features will be encoded closely in the semantic space.
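To make the retrieval formulation above concrete, the following is a minimal sketch (our own illustration, not the paper's code; all names and sizes are ours) of a BOW query encoder and top-K retrieval by dot product or cosine score:

```python
# Sketch of the problem-statement setup: a BOW encoder that averages word
# vectors, and top-K retrieval against a table of item vectors.
import numpy as np

def bow_encode(word_ids: list[int], word_vecs: np.ndarray) -> np.ndarray:
    """Mean of the word vectors: the BOW encoder."""
    return word_vecs[word_ids].mean(axis=0)

def top_k(query: np.ndarray, item_vecs: np.ndarray, k: int,
          cosine: bool = True) -> np.ndarray:
    """Indices of the k items with the highest score(q, v)."""
    if cosine:
        q = query / np.linalg.norm(query)
        v = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
        scores = v @ q
    else:
        scores = item_vecs @ query
    return np.argsort(-scores)[:k]

word_vecs = np.random.randn(1000, 64)   # m = 1000 words, d = 64 (toy values)
item_vecs = np.random.randn(5000, 64)   # n = 5000 items
print(top_k(bow_encode([3, 17, 42], word_vecs), item_vecs, k=10))
```

As the paper later observes (Section 5.3), the choice between dot product and cosine can matter in practice, since the vector norms may carry information beyond pure relevance.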
However, there are many cases in which words with similar semantic meanings do not co-occur very often. For instance, if items rarely have both "funny" and "prank" among their text features, then the learned vectors of these two words will be very different, and therefore a search for the query "funny" is unlikely to retrieve the "prank" items. Fortunately, it is common for pairs of items to form a connection in a search and recommender system. For the above example, if users find these two kinds of items similar, they may link them as correlated items implicitly via their interactions with the search and recommender system. We propose to improve the target task of search retrieval by transferring knowledge from the (item, item) correlations, so that the semantic links between related words and items can be discovered.

Zero-shot Learning: Multi-task Item Encoder

Our first proposed way of transfer learning from the (item, item) correlation matrix is to jointly optimize the following two tasks: (Task 1) predicting an item given its text features, and (Task 2) predicting an item given its neighbor items. The specific model depends on the choice of prediction function and loss function: for instance, cross-entropy loss on a softmax prediction; square loss on a linear dot-product prediction; or pairwise ranking loss [30] with negative sampling. Here we only present the formulations with cross-entropy and square loss for simplicity. Formally, the probability of predicting item y_i given its text-feature encoding q_i, or given an item y_j, is as follows:

Pr(y_i | q_i) = exp(q_i^T v_i) / Σ_ℓ exp(q_i^T v_ℓ),    Pr(y_i | y_j) = exp(u_j^T v_i) / Σ_ℓ exp(u_j^T v_ℓ),

where q_i = encoder(w_{e_1}, w_{e_2}, · · · , w_{e_p}) is the encoder of the text features of y_i, v_i are the item vectors, and u_j is the context vector of y_j. The goal is to jointly optimize the cross-entropy (CE) loss for the above two tasks:

L_CE = − Σ_i log Pr(y_i | q_i) − Σ_{i,j: M_{i,j}=1} log Pr(y_i | y_j).

Alternatively, when modeling them as regression problems, we can use a weighted square loss (SL) that penalizes observed entries toward 1 and, with a small weight, all remaining entries toward 0:

L_SL = Σ_i [(1 − q_i^T v_i)^2 + ω_0 Σ_{ℓ≠i} (q_i^T v_ℓ)^2] + Σ_{i,j: M_{i,j}=1} (1 − u_j^T v_i)^2 + ω_0 Σ_{i,j: M_{i,j}=0} (u_j^T v_i)^2,

where ω_0 < 1.0 is the weight for implicit observations [12]. The optimization outputs are word vectors w_i (1 ≤ i ≤ m) together with item vectors v_i and context vectors u_i (1 ≤ i ≤ n). The context vectors are akin to those in language modeling (see word2vec [17]). The reason we introduce context vectors instead of reusing the item vectors is that we want to encode two items close together in the semantic space if their neighbors largely overlap. By contrast, if we eliminate context vectors altogether and use item vectors for both the prediction input and the target, i.e., Pr(y_i | y_j) ∝ exp(v_j^T v_i), this effectively encodes each item close to its neighbors. From a matrix factorization perspective, eliminating context vectors is similar to a symmetric matrix factorization of M, while M is not necessarily symmetric.

Figure 2: Illustration of the differences between the three models: the baseline Single Task Learning (STL) (left), which only trains Task 1 of ZSL_ME; the proposed ZSL_ME (middle), which jointly optimizes the two tasks; and the proposed ZSL_TE (right), which optimizes over the (item, item) correlations.

We call this framework Zero-shot Learning with Multi-task Item Encoder (ZSL_ME), as items are the shared prediction targets for both tasks. The item vectors v_i serve the role of bridging the two optimization tasks. Specifically, vectors v_i and v_j will be close in the semantic space if their corresponding correlation patterns (i.e., the i-th and j-th rows of M) are similar. Word vectors, on the other hand, are indirectly pushed to encode such correlation information from M.
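The following sketch shows our reading of the ZSL_ME joint objective (a softmax/cross-entropy variant; the array names, sizes, and sampled pairs are ours, not the paper's): the two tasks share the item vectors v, which is what bridges text and correlation information.

```python
# Sketch of the ZSL_ME joint loss: Task 1 predicts an item from its encoded
# text q_i; Task 2 predicts an item from a neighbor's context vector u_j.
# Both tasks score against the same item-vector table V.
import numpy as np

def softmax_ce(inputs: np.ndarray, targets: np.ndarray, V: np.ndarray) -> float:
    """Mean cross-entropy of softmax(inputs @ V.T) at the target items."""
    logits = inputs @ V.T
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

n, d = 500, 32
V = np.random.randn(n, d) * 0.1    # item vectors v_i (shared by both tasks)
U = np.random.randn(n, d) * 0.1    # context vectors u_j
Q = np.random.randn(n, d) * 0.1    # q_i = encoder(text features of item i)

pairs = np.random.randint(0, n, size=(2000, 2))        # (i, j) with M[i, j] = 1
loss = (softmax_ce(Q, np.arange(n), V)                 # Task 1: item i | its text
        + softmax_ce(U[pairs[:, 1]], pairs[:, 0], V))  # Task 2: item i | neighbor j
print(loss)
```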
Consider our previous example of the words "funny" and "prank", which do not co-occur often among the text features of the same items. If their associated items share similar correlation patterns (i.e., similar neighbors), those items will have similar embedding vectors (due to the optimization of Task 2), and therefore the embedding vectors of these two words will also be close (due to the optimization of Task 1).

Zero-shot Learning: Item-to-Text Transformed Encoder

Intuitively, there are two types of data relations: one between an item and its text features, and the other between an item and its neighbor items (i.e., context items). The ZSL_ME method above treats both relations as prediction targets. In this section, we propose to model the (item, item) correlations as the only prediction task, and to utilize the text features to encode the context items. Given the text features (x_{e_1}, x_{e_2}, · · · , x_{e_p}) of an item, the context vector of this item is defined by

u = encoder(w_{e_1}, w_{e_2}, · · · , w_{e_p}).

So the cross-entropy loss can be computed as L_CE = − Σ_{i,j: M_{i,j}=1} log Pr(y_i | y_j), with Pr(y_i | y_j) defined as before but with u_j now given by the encoder above; and similarly the weighted square loss is L_SL = Σ_{i,j: M_{i,j}=1} (1 − u_j^T v_i)^2 + ω_0 Σ_{i,j: M_{i,j}=0} (u_j^T v_i)^2. Since the context vectors are represented by word vectors, the overall outputs of this model are just the word vectors w_i, 1 ≤ i ≤ m, and the item vectors v_i, 1 ≤ i ≤ n. We call this framework Zero-shot Learning with Item-to-Text Transformed Encoder (ZSL_TE), as the optimization on the correlation matrix trains the item and context-item representations, which pass on to the text representation through a transformed encoder on the text. Here we explain the intuition behind this proposed method. The text features of an item are usually derived directly from the title or description of the item, which is precise information about the item; they are therefore well suited to serve as the item encoder rather than as a prediction target. On the other hand, the correlation matrix M represents how similar or related two items are content-wise. This relation usually extends beyond text-feature similarity, which makes it a useful prediction target. To demonstrate how this method generalizes to semantically related words that do not co-occur often, consider the same example of "funny" and "prank". If their associated items (denoted y_r and y_s, respectively) co-occur as neighbors of the same item y_p, then both u_r and u_s will be brought closer together by the terms v_p^T u_r and v_p^T u_s in the loss function. Since u_r and u_s are encoded from the terms "funny" and "prank", respectively, these two words end up close in the semantic space. Figure 2 illustrates how each model is different.
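A corresponding sketch of ZSL_TE as we read it (our naming and toy data): the context vector is computed from the item's words, so the text parameters receive their gradients from the (item, item) task alone.

```python
# Sketch of ZSL_TE: only word vectors W and item vectors V are learned;
# the context vector of an item is a BOW encoder over its own words.
import numpy as np

m, n, d = 1000, 500, 32
W = np.random.randn(m, d) * 0.1              # word vectors (only text parameters)
V = np.random.randn(n, d) * 0.1              # item vectors
item_words = [np.random.randint(0, m, 5) for _ in range(n)]  # toy text features

def context_vec(j: int) -> np.ndarray:
    """u_j = BOW encoder over the words of item j (replaces a free u_j)."""
    return W[item_words[j]].mean(axis=0)

def score(j: int, i: int) -> float:
    """Model score for 'item i is a neighbor of item j': u_j . v_i."""
    return float(context_vec(j) @ V[i])

print(score(3, 7))
```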
Model Discussion

It is worth mentioning that the Neural Collaborative Filtering (NCF) [11] method applies a multi-layer perceptron instead of a dot product as the embedding combiner. We do not compare these two methodologies, as the choice of embedding combiner is not of central importance to this paper; we refer interested readers to [23] for more details. In the following, we discuss more details on the model options, training algorithms, and practical usage examples. Weighted Training. Real-world correlation data are usually skewed towards popular items (a.k.a. the power law) [7]. In other words, the numbers of non-zeros (i.e., nnz) of the rows (or columns) of M can be concentrated in only a few rows (or columns). We can reweight the training examples to avoid having the loss function dominated by only a few very popular items: each training term for a pair (i, j) is scaled by r_i · c_j, where r_i are row weights and c_j are column weights. In this paper, we set them to be proportional to 1/√nnz of the corresponding rows and columns, and rescale them so that the mean values of both the row weights and the column weights are 1.0.
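The reweighting just described can be sketched as follows (our implementation of the stated rule; the toy matrix is ours):

```python
# Popularity reweighting: weights proportional to 1/sqrt(nnz) per row/column
# of M, rescaled so each set of weights averages 1.0.
import numpy as np

M = (np.random.rand(6, 6) < 0.4).astype(float)   # toy correlation matrix
row_nnz = np.maximum(M.sum(axis=1), 1.0)
col_nnz = np.maximum(M.sum(axis=0), 1.0)

r = 1.0 / np.sqrt(row_nnz)
r *= 1.0 / r.mean()                              # mean row weight = 1.0
c = 1.0 / np.sqrt(col_nnz)
c *= 1.0 / c.mean()                              # mean column weight = 1.0

weights = np.outer(r, c)                         # term (i, j) scaled by r[i] * c[j]
print(weights.round(2))
```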
Choice of Encoder. The encoder is applied to the text features of an item during training, as well as to the query words during serving. Depending on the text-feature characteristics (e.g., ordered vs. unordered, long vs. short) and the specific application domain, various encoders can be used. Bag-of-Words (BOW) is the simplest to use, as it imposes no requirements on the text-feature format. An RNN can be used to encode a sequence of words, and a self-attention mechanism can usually work well for long sentences or documents [4]. In this paper, we use BOW in our experiments for simplicity, and also to reduce complexity in order to work within live-experiment serving constraints. Loss Function & Optimization. For large-scale search and recommender systems, the number of items can be in the millions or beyond. This imposes computational challenges when computing the softmax function for L_CE or iterating through all negative examples for L_SL, which has O(n^2) complexity. Approximate solutions include sampled softmax, hierarchical softmax, and negative sampling; they share the core idea of avoiding an exhaustive iteration through all items, so that optimization such as stochastic gradient descent (SGD) can be applied. In the special case of a BOW encoder with the square loss L_SL, the first-order and second-order derivatives of the full loss function can be computed efficiently, without explicitly iterating through the O(n^2) negative examples [1]; the optimization problem can therefore be solved by a coordinate descent algorithm. Given this advantage of accounting for the full set of negative examples without negative sampling, we choose to use L_SL instead of L_CE in our experiments.

EXPERIMENTS

We first conduct a set of offline evaluations (Sections 5.1, 5.2, 5.3) comparing our proposed methods ZSL_ME and ZSL_TE with baseline methods. Then we conduct a live experiment (Section 5.4) with the best ZSL method on one of the world's largest search and recommender systems from Google. Dataset. Each item y_i is a product of the recommender system. We derive the correlation matrix M from sequential item consumptions: formally, if y_p is consumed right after y_q by the same user, then y_p is a neighbor of y_q (i.e., M_{q,p} = 1). To reduce the noise of the correlation matrix, we rank each seed item's neighbors by the counts of their co-occurrences with the seed item, and only keep the top 250 of them, so each row of M has at most 250 non-zeros. Titles and descriptions are used as the text features of each item. The word vocabulary contains both unigrams and bigrams from those text features. We threshold the minimal occurrences for items and words; the final vocabulary size is 17,714,821 for items and 2,451,962 for words. The average number of neighbors of an item is 176, and the average number of words of an item is 155.

Table 1: Recall (%) on the correlation matrix reconstruction task.
Method:     STL    SMC    ZSL_ME    ZSL_TE
Recall (%): 17.8   13.8   27.4      35.0

Experiment Settings. One baseline model is the single-task learning version of ZSL_ME, which only trains on Task 1, predicting the item given its text features. We call it Single Task Learning (STL). It is important to note that STL has proven to be a more effective approach than traditional methods like LSI in many recommendation tasks [2, 12], as it models implicit feedback. In addition, we also trained a supervised multi-class classification model (SMC) directly on the actual search data; each (query, item) record corresponds to one item consumption resulting from the search query. Formally, the word and item representations are trained to minimize the cross-entropy loss of a sampled softmax. Although this supervised method falls outside the zero-shot framework of this paper, it is interesting to compare against it and study how the methods differ. Here are the detailed settings for each method: • STL: single-task square loss L_SL without combining (i.e., encoding) text features; ω_0 = 0.001, λ = 4.0; trained with 10 iterations of coordinate descent. • ZSL_ME: the same settings as above, with the multi-task square loss. • ZSL_TE: square loss L_SL with the BOW encoder; the same settings as the above two. The following sections involve the semantic retrieval task: given a seed vector q (e.g., a query or an item), retrieve the top-K items v with the highest score(q, v). By default (unless otherwise stated) we use the cosine score for STL, ZSL_ME, and ZSL_TE, and the vector dot product for SMC.

Correlation Matrix Reconstruction Task

First we evaluate how each method performs in terms of reconstructing the correlation matrix M. We use recall to evaluate the percentage of relevant items retrieved at top positions; this is commonly used for retrieval and top-n item recommendation models. Specifically, for each item y_i ∈ Y, denote S_true,i as the non-zeros in the i-th row of the correlation matrix M and k_i as the size of S_true,i. Denote S_pred,i as the retrieved top-k_i items v_ℓ with the highest score(v_i, v_ℓ); the recall for item y_i is then defined as recall_i = |S_pred,i ∩ S_true,i| / k_i, and we report the average recall over all items. We believe using recall for evaluation is more intuitive than directly comparing the square losses of different methods, because the square loss depends heavily on the hyperparameters, and its value does not by itself carry any application meaning. Results. Table 1 shows the recalls of all methods. We find that the two proposed transfer learning methods, ZSL_ME and ZSL_TE, outperform STL and SMC. This is expected, as STL and SMC are trained without any information from the (item, item) correlation data. This observation shows that the outputs of our proposed framework are indeed influenced by the recommender system data in a positive way. Finally, we notice that between the two proposed methods, ZSL_TE is superior to ZSL_ME. We observe this pattern in the other tasks in this paper as well. Our hypothesis is that directly optimizing two tasks in ZSL_ME could potentially introduce conflicts [24], whereas ZSL_TE does not have this issue.
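As a concrete reading of the recall metric used above, here is a small helper (ours, not the paper's code):

```python
# Sketch of the per-item recall: retrieve the k_i nearest items by cosine
# score and measure the overlap with the true neighbor set from M.
import numpy as np

def recall_for_item(i: int, item_vecs: np.ndarray, true_neighbors: set[int]) -> float:
    k = len(true_neighbors)
    v = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    scores = v @ v[i]
    scores[i] = -np.inf                      # exclude the seed item itself
    predicted = set(np.argsort(-scores)[:k].tolist())
    return len(predicted & true_neighbors) / k

item_vecs = np.random.randn(100, 16)         # toy item vectors
print(recall_for_item(0, item_vecs, true_neighbors={3, 8, 21}))
```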
Offline Retrieval Task on Human Labeled Data

In this section, we evaluate how relevant the retrieval results are using human-labeled (query, item) ground-truth pairs. We have a list of queries and their corresponding items that were labeled as relevant by humans. We then form a candidate pool as the union of all relevant items of these queries, and a target set for each query consisting of its relevant items. Formally, consider r different queries, where each query corresponds to a set of relevant items, denoted S_1, S_2, · · · , S_r (e.g., S_1 = {y^(1)_1, · · · , y^(1)_{k_1}}). We denote their joint set by S = S_1 ∪ S_2 ∪ · · · ∪ S_r. Given a query (for instance the i-th one), we select the k_i items from S with the highest score to the query, and denote this set by S_pred,i. The recall for this query is then defined as recall_i = |S_pred,i ∩ S_i| / k_i. Intuitively, this recall denotes the percentage of the k relevant items that are among the top-k predictions. We note that it is possible for certain items to appear in multiple ground-truth sets S_i, i = 1, · · · , r (i.e., they are not disjoint). This actually makes the task more difficult, because we select the group of queries to share some common attributes, so that for each query the unrelated items (i.e., those related to other queries) are not too distinct. We can define the average number of appearances of items as Σ_{i=1}^{r} |S_i| / |S| to quantify the degree of overlap among these sets. We have four such datasets, each representing one category of queries; for instance, one dataset has all of its queries and items related to music. See Table 2 for their statistics.

Table 2: Statistics of the evaluation sets from human-labeled data. The columns represent: the total number of queries; the size of the candidate pool (i.e., the union of the target sets of all queries); the average size of the target sets; and the average number of appearances of items.

Results. Figure 3 shows the comparison results for all methods on this task. We can see that our proposed transfer learning approaches ZSL_ME and ZSL_TE outperform STL. This shows that incorporating the (item, item) information from the recommender system can improve the semantic search retrieval task. Consistently, between these two transfer learning approaches, ZSL_TE performs much better. We also notice that ZSL_TE even achieves higher recall than the supervised method SMC. The explanation is that, although the supervised method is directly trained on (query, item) data, the evaluation sets are extracted from human-labeled data, which encode only the relevance between query and item. By comparison, actual user search data (query, item) is affected by the popularity or content of the item: in a large-scale recommender system, hundreds of thousands of items can be relevant to a query, but only those that are appealing enough will be clicked by users. We discuss evaluation results on the actual search data in Section 5.3.

Offline Retrieval Task on Search Data

In this section, we use ground-truth (query, item) pairs from the search logs. Note that this is the same data source as the training data of SMC. We hold out 1 million such pairs for evaluation. Unlike the human-labeled data, the search data reflects not only relevance but also users' preferences: for a large-scale search and recommender system, certain items can be very relevant to a query yet not necessarily preferred by users. We use the metric recall@K for evaluation, defined as the ratio of ground-truth items within the top-K retrieved list of a method. Results. Figure 4 shows the comparison results of the different methods on this task. SMC is not shown in the figure, as it is the supervised method trained on the exact same data source as the evaluation task, so it is expected to outperform all the unsupervised methods by large margins; in our case, SMC reaches 73.6% for recall@300. We can clearly see that our proposed ZSL_TE achieves superior recalls compared to ZSL_ME and STL. So far, all the offline evaluations show that ZSL_TE performs consistently better than ZSL_ME, and in this task ZSL_ME is even slightly worse than STL. Our hypothesis is the same as stated in Section 5.1: multi-task optimization can introduce additional conflicts, as is commonly observed in related research [24].
We also notice that, if we change the SMC retrieval score function from the vector dot product to cosine (as used by all other methods), its recalls are beaten by our ZSL_TE.

Figure 4: Recall@K for the evaluation task on search data. Dotted lines are two added methods shown for demonstration purposes. SMC(cosine) uses the SMC method for retrieval with cosine similarity instead of the dot product as the score function. ZSL_TE(rescale) rescales the item vectors of the ZSL_TE method by borrowing the norms of the item vectors learned in SMC, with the dot product as the retrieval score function.

This provides an interesting insight: the vector norms in the SMC method carry useful information about user preferences over items. To verify this hypothesis, we rescaled the item vectors of ZSL_TE based on the norms from SMC, and found that the recall improved. (The same observation holds for ZSL_ME and STL; they are omitted from Figure 4 for simplicity.) In practice, our proposed methods are not meant to replace existing supervised models, but to serve as an additional source of candidate generation. Here we show that combining our method with SMC can improve recall, even with the following trivial ensemble rule: the new item list keeps the first 150 items from SMC; the next 150 items then interleave the ZSL_TE results with the rest of the SMC results. Table 3 shows the comparison results. It is also important to note that the benefits of ensembling our ZSL_TE with supervised methods can be even more pronounced, because the current evaluation is done on offline search data, which is limited in scope: it cannot answer the question of what would have happened had we recommended a different item instead of the current one. This effect can only be assessed from a live experiment (Section 5.4).
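A sketch of the trivial ensemble rule above (our implementation of the stated rule; the de-duplication detail and the toy lists are our additions, since the paper does not spell them out):

```python
# Keep SMC's first `head` results, then alternate between ZSL_TE's results
# and SMC's remaining results, skipping duplicates.
def ensemble(smc: list[str], zsl_te: list[str], head: int = 150) -> list[str]:
    merged = smc[:head]
    seen = set(merged)
    for pair in zip(zsl_te, smc[head:]):       # interleave the two tails
        for item in pair:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

smc_list = [f"s{i}" for i in range(300)]
zsl_list = [f"z{i}" for i in range(150)]
print(ensemble(smc_list, zsl_list, head=3)[:8])   # s0 s1 s2 z0 s3 z1 s4 z2
```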
Live Experiment

Settings. In this section, we evaluate ZSL_TE in an A/B live experiment. The control group is the production search system, which is highly optimized with many components of advanced retrieval techniques (e.g., both term-frequency-based scoring and supervised machine learning algorithms). The experiment group is an ensemble of our ZSL_TE and the production system: each time a query comes in, ZSL_TE retrieves its top-100 items, and the ensemble algorithm jointly ranks these items (i.e., based on query features as well as item features) together with those retrieved by the control production system. Model Daily Refresh. We warm-start training with the new data every day to include new items and words in the model. Specifically, we load the existing model and only need to run a few training iterations to make the embeddings of the new items and words converge. Evaluations. We conduct this live experiment over several million queries. We compare the following evaluation metrics: • Query Coverage: the ratio of queries that receive at least one user interaction. • User Interaction: the level of interaction between users and the items. • Query Refinement: the proportion of queries that have a follow-up query sharing a keyword. A smaller value is better (meaning users are more satisfied with the results). • Next-page CTR: the proportion of queries that lead to a next-page click by users. A smaller value is better (meaning users are satisfied with the first page of results). • Human-rated Relevance Score: a numerical score representing the relevance of the retrieved items to a query, as evaluated by trained human raters. We randomly sample 250 instances, each with a query and a list of retrieved items, for both the control group and the experiment group. We also have several ranking metrics that are similar to the normalized discounted cumulative gain (nDCG), but they are tailored to our specific system and therefore less generic; we do not include them in this paper to avoid confusion, but it is important to mention that our live experiment also showed significant improvements on these metrics. Results. As shown in Table 4, we observe significant improvements, including increased query coverage, decreased query refinements (meaning users are more satisfied with the results), increased user interaction, and higher relevance scores from human raters. These results demonstrate that the proposed approach is effective on the semantic search task, even without training on any search data. We also notice that our method yields a larger improvement when the query length (number of unigrams) is small (see Figure 5). Since shorter queries often correspond to broader user intent, there are usually more relevant items per query, which means more (query, item) pairs were previously unseen in the supervised training data. This is evidence that our zero-shot transfer learning
2020-08-10T01:00:28.619Z
2020-08-07T00:00:00.000
{ "year": 2020, "sha1": "334bf07262320eb895a22973c948b4111e782daa", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3340531.3412752", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "334bf07262320eb895a22973c948b4111e782daa", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
265044456
pes2o/s2orc
v3-fos-license
Confounding Fuels Misinterpretation in Human Genetics

The scientific literature has seen a resurgence of interest in genetic influences on human behavior and socioeconomic outcomes. Such studies face the central difficulty of distinguishing possible causal influences, in particular genetic and non-genetic ones. When confounding between possible influences is not rigorously addressed, it invites over- and misinterpretation of data. We illustrate the breadth of this problem through a discussion of the literature and a reanalysis of two examples. Clark (2023) suggested that patterns of similarity in social status between relatives indicate that social status is largely determined by one's DNA. We show that the paper's conclusions are based on the conflation of genetic and non-genetic transmission, such as wealth, within families. Song & Zhang (2024) posited that genetic variants underlying bisexual behavior are maintained in the population because they also affect risk-taking behavior, thereby conferring an evolutionary fitness advantage through increased sexual promiscuity. In this case, too, we show that possible explanations cannot be distinguished, but only one is chosen and presented as a conclusion. We discuss how issues of confounding apply more broadly to studies that claim to establish genetic underpinnings of human behavior and societal outcomes.

Introduction

People vary remarkably in behavior and social outcomes. This variation sparks curiosity about its causes, and for the past 150 years scholars have debated the extent to which it arises from underlying genetic differences. In the 19th century, Galton (1) found strong resemblance between parents and their offspring in measures of social status and on that basis inferred that genetics is the most likely root cause, a school of thought described broadly as "hereditarianism" (see 2). As is now well appreciated, Galton's inference dismissed the fact that parents transmit not only genetic material to their offspring, but also wealth, place of residence, knowledge, religion, culture, and more. For such attributes, transmission within families can parallel genetic transmission (Fig. 1a) (3-20), often leading genetic and non-genetic transmission to be indistinguishable in observational data. A long history of scholarship has highlighted this type of confounding and how it impedes inference of the causes of phenotypic variation, especially when molecular genetic data are unavailable (21-28). Here, we demonstrate how confounding is frequently overlooked or downplayed in contemporary reports about genetic causes of human behavior and socioeconomic outcomes. We begin with a reanalysis of data from a recent publication that made claims about genetic determinism of social status (29). We then discuss how confounding can pervasively impact inferences based on genome-wide association studies (GWAS) of behavior and social outcomes. Lastly, we illustrate the impacts of a broader category of confounding and errors in causal inference, stemming from data preparation and other analysis choices, in a recent study (30) that purported to explain the evolutionary maintenance of genetic variation affecting bisexual behavior.
Confounding fuels hereditarian fallacies

A recent publication (29) analyzed familial correlations in a dataset of socioeconomic measures (e.g., occupational status, house value, literacy) from a selection of records spanning the 18th to 21st centuries in England. (29) fits a quantitative genetic model to these observed correlations ((31, 32); Supplementary Note 1). Based on this fit, (29) infers that social status persists intergenerationally because of strong assortative mating on a status-determining genotype (or "social genotype," as the author has put it in previous work (33)). Further, the paper argues that because mates share the genes underlying social status to such a high degree, the persistence of social status within families, and the persistence of differences in status among families, have been largely unaffected by changes in social policy over the last four centuries. In a subsequent commentary about this work (34), the author presents the results of (29) as providing strong support for a hereditarian interpretation; in doing so, he appeals to the metaphor of a "genetic lottery" underlying social outcomes.

Here, we discuss the failure to account for the confounding of genetic and non-genetic transmission (Fig. 1a) that, together with other core flaws of the analysis (Fig. 2a-b), fuels the hereditarian claims in (29) (see our discussion of other misinterpretations, errors, and incongruencies in (29) in Supplementary Notes 2-7; Tables S1-S3; Figs. S1-S13). We also demonstrate that familial status correlations varied substantially over the time period examined, generally decreasing (Fig. 2c). This finding contrasts with the paper's conclusion, based on the same data, that social mobility has been stagnant. As we show below, the analyses in (29) do not establish the contribution of genetics to social status.

Confounding between genetic and non-genetic transmission. Inferences in (29) are based on a linear regression model derived from quantitative-genetic theory developed by R.A. Fisher (31, 32) (Supplementary Note 1) and the model

P = G + E,    (Eq. 1)

where an individual's phenotype, P, is the sum of separable genotypic (G) and environmental (E) influences on it. Since genotypes are transmitted from parents to offspring, genetic parameters can be inferred from correlations between relatives, so long as environmental influences are independent and random with respect to genotypes. Fisher (1918) formally showed that under this model, the expected correlation in a trait between pairs of individuals of a defined relationship is a function of the genealogical relationship between the relatives, the trait's heritability (h²), and the extent of assortative mating in the population (r). (h² is the fraction of phenotypic variance due to additive genetic variance, commonly referred to as "narrow-sense" heritability.)
Crucially, to interpret the model parameters h² and r as relating to genetic effects, Fisher's model assumes that there are no non-genetic (material, environmental, or cultural) influences on a trait that are systematically shared or transmitted between relatives. This assumption is valid in an experimental setting, for instance one in which genotypes are randomized with regard to environment. In humans, however, that assumption is nonsensical. Non-genetic transmission is ubiquitous for social and behavioral traits. Traits may be transmitted directly between relatives (e.g., literate parents teaching their children how to read) (5), or via indirect mechanisms such as "ecological inheritance," where the trait value of an offspring is influenced by the environmental conditions bestowed by their parents (e.g., familial wealth influencing educational opportunities) (8, 35). When genotypes cannot be randomized over environments, true genetic effects are much more difficult to separate from other factors underlying phenotypic resemblance between relatives (22). In (29), for instance, the assumption of no systematic non-genetic transmission implies that similarity in house value among relatives (one of the measures of social status analyzed) is due solely to shared genes, and does not arise from similarity in parental wealth, the inheritance of wealth or property, or having learned from one's relatives about investment.

In fact, we found signals of strong confounding between genetic and non-genetic contributions to familial resemblance in the data used in (29). The paper acknowledges the inheritance of material wealth from one's parents as an example of non-genetic transmission only when treating wealth itself as the focal status measure. For the other measures studied, the effect of familial wealth on social status is ignored. Yet familial wealth can obviously influence a wide range of conditions that affect offspring (e.g., healthcare, place of residence, access to tutors, social circles, etc.) (36-40). Consistent with this intuition, we found that all seven status measures analyzed in (29) are substantially correlated with an individual's father's wealth (Pearson r ranging from 0.19 to 0.66; mean r = 0.36; all P < 2 × 10^-16; Table S2; Fig. 2a). Closer relatives tend to have more similar paternal wealth, and the similarity in paternal wealth between relatives predicts their similarity in occupational status extremely well (Pearson r = 0.91; Fig. 2a, inset). Thus, there is clear confounding in these data between the transmission of genes and the effects of parental wealth on familial similarity in social status. Apart from wealth, numerous other non-genetic factors may contribute to familial correlations (41, 42). (29) presents two post hoc analyses in an attempt to rule out non-genetic contributors to familial resemblance in social status. In Supplementary Note 4, we detail why these analyses are uninformative as to the strength of non-genetic effects on resemblance in social status between relatives (Fig. S1).
The confounding of genetic and non-genetic transmission in these data invalidates the interpretation of the model parameters offered in (29) as pointing to identifiable genetic contributions (Supplementary Note 1). In particular, in the presence of such confounding, the interpretation of G and E in Eq. 1 as transmissible genetic (heritable) and random non-genetic effects on a phenotype, respectively, no longer holds. Instead, they can be interpreted as a transmissible component and a random, non-transmissible component. Consequently, the parameter interpreted in (29) as narrow-sense heritability, h², is in fact an estimate of the "total transmissibility" of a trait, t², the proportion of trait variance attributable to an unknown compound of transmissible influences on the trait, including genes, culture, wealth, environment, etc. (10, 12). The second key parameter, r, which (29) interpreted as the "spousal correlation in the underlying genetics," does not represent a genetic correlation between mates. It is instead the spousal correlation in the transmissible component of the trait. r is derived from the "intergenerational persistence rate," b = (1 + r)/2, estimated from the regression model. The expected correlation for a given kinship pair is equal to t²b^g, where g denotes genealogical distance (Fig. 1a). (Note that the parameterization for father-son and grandparent-grandchild relationships also depends on the degree of assortative mating with respect to the focal trait itself; see Supplementary Note 1.) The conflation of genetic and non-genetic transmission helps to explain why the model parameters estimated in (29), which are claimed to represent the quantitative genetic parameters h² and r, are much higher than estimates of these same parameters from studies that attempt to account for confounding (e.g., 19, 43, 44).

Conclusions in (29) about the insensitivity of social standing to policy and sociopolitical context rest on the similarity of estimates of the parameter b across status measures and across time. (29) argues that this stability is due to strong assortative mating on a genetic factor for "social ability." However, given that both genetic and non-genetic factors are transmitted within families, it follows that r tells us nothing about genetic versus non-genetic contributions to assortment, and b tells us nothing about the cause of the within-family persistence of social status (Fig. 1a; Supplementary Note 1).

Regardless of whether it is due to genetic causes or not, a striking report of (29) is that "The vast social changes in England since the Industrial Revolution, including mass public schooling, have not increased, in any way, underlying rates of social mobility". In point of fact, we found that the estimates of familial correlations in (29), and, in turn, the estimates of the persistence rate, are heavily affected by statistical artifacts (Fig. 2b; Supplementary Note 6). Furthermore, we show that across status measures, parent-offspring correlations, an established measure of social mobility (45, 46), generally decrease over time (Fig. 2c; Supplementary Note 7). How could the new measure, the "persistence rate," used by (29) lead to such contrasting conclusions? (29) offers neither a justification for why this measure reflects social mobility, nor an explanation for the discrepancies with established measures of mobility used in other literature (e.g., 47) and applied to the same data.
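To make the reinterpretation above concrete, here is a small numeric sketch (entirely our illustration; the parameter values are arbitrary) of the expected kinship correlations t²b^g. Nothing in these expectations distinguishes a genetic transmissible component from a non-genetic one:

```python
# Expected relative-pair correlations under the transmissibility model:
# corr = t^2 * b^g. The same numbers arise whether the transmitted component
# is a genotype or, say, wealth, so the fit cannot separate the two.
t2 = 0.6               # total transmissibility (NOT identifiable as heritability)
b = 0.8                # persistence rate; under the model b = (1 + r) / 2
r = 2 * b - 1          # implied "spousal correlation" in the transmissible component

print(f"implied r = {r:.2f}")
for g in range(1, 7):  # g = genealogical distance between the pair
    print(f"g = {g}: expected correlation = {t2 * b**g:.3f}")
```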
Some readers have already taken the arguments in (29) as compelling evidence that social status is largely caused by genetic factors (48-52). Yet the assumptions and interpretations in (29) ignore a century of quantitative-genetic theory, previous empirical evidence for confounding, and the fallacies that arise when confounding is ignored (13, 17, 21, 22, 26, 42, 53-60), as well as patterns in the paper's own data that conflict with the interpretations presented. In this regard, we emphasize that (29) does not merely overstate the findings: the model parameters are misconstrued and the pervasive confounding of genetic and non-genetic transmission is not addressed.

Are modern genomic studies less susceptible to confounding?

In relying solely on observational phenotypic data and assuming that transmission in families is solely genetic, (29) is similar in spirit to studies carried out by Francis Galton a century and a half ago. One might hope that the inferential flaws described above are addressed in studies that use large genomic datasets and employ state-of-the-art statistical methods to adjust for confounding. As we outline, however, the same concerns remain broadly applicable, as confounding is still poorly understood and often underplayed in the literature.

Confounding in genomic studies is poorly understood. Human geneticists have long appreciated that there are myriad ways by which a genetic variant may be associated with a trait or outcome (26, 61, 62). A key example is "population stratification" in genomic data (e.g., in GWAS), wherein patterns of genetic similarity in a sample are correlated with the phenotype studied (Fig. 1b). Possible reasons for this correlation include social, environmental, or genetic factors, contemporary and historical; typically, the specific causes are unknown. These same axes of genetic similarity ("population structure") are reflected in the frequencies of numerous genetic markers that may be tested for association with a trait in a GWAS. Consequently, any such markers will tend to be correlated with the trait, even if only a subset (or in fact none) of the variants causally affect it (Fig. 1b). Consider, for example, a GWAS aiming to identify genetic risk factors for asthma in a sample of people from the US of either primarily European American genetic ancestry or African American genetic ancestry. There are many millions of variants in the genome that differ significantly in frequency between these groups. At the same time, African Americans in the US are systematically exposed to higher levels of air pollution (63), an environmental risk factor for asthma. If confounding is not adequately addressed, the GWAS would then lead us to conclude, erroneously, that "African American genetics" predispose one to asthma. More generally, the contributors to the correlation between axes of population structure and a phenotype may be partly or entirely genetic. Regardless, they will drive confounded associations at numerous genetic markers that tag these axes of population structure (Fig. 1b).
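The asthma scenario above can be made concrete with a toy simulation (entirely our illustration; the numbers are arbitrary): a variant with no causal effect shows a strong naive association whenever allele frequency and environmental exposure both differ between groups.

```python
# A null variant appears associated with a trait when allele frequency and
# environment both differ between two groups (population stratification).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # 0/1 group membership
freq = np.where(group == 0, 0.2, 0.5)            # allele frequency differs by group
genotype = rng.binomial(2, freq)                 # 0/1/2 allele counts; NO causal effect
phenotype = 0.5 * group + rng.normal(0, 1, n)    # environmental difference only

# Naive GWAS-style regression of phenotype on genotype (no adjustment)
slope = np.cov(genotype, phenotype)[0, 1] / np.var(genotype)
print(f"naive effect estimate: {slope:.3f}")     # clearly nonzero

# Adjusting for group (a stand-in for structure correction) removes the signal
resid_g = genotype - np.polyval(np.polyfit(group, genotype, 1), group)
resid_p = phenotype - np.polyval(np.polyfit(group, phenotype, 1), group)
slope_adj = np.cov(resid_g, resid_p)[0, 1] / np.var(resid_g)
print(f"adjusted effect estimate: {slope_adj:.3f}")  # approximately zero
```

In real data the axes of structure are continuous and only partially observed, which is why residual confounding can survive such adjustments.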
Human geneticists use various methods to adjust for confounded associations. However, confounding may persist despite the application of these methods (residual confounding). In 2019, we and other researchers discovered that genetic effect estimates in the largest GWASs for height, the most extensively studied polygenic trait of humans, were biased due to residual confounding (58, 59). It became clear that the bias for each individual genetic variant was slight, but it was systematic across variants. Consequently, when researchers summed over signals from many genetic variants, they also summed over systematic biases. This led to erroneous conclusions in many studies (as detailed in 17, 58, 59). Further research has demonstrated that residual confounding may affect many GWASs, in particular for social outcomes and traits that are heavily influenced by social context (26, 60, 62, 64-66).

Confounding in genomic studies is downplayed. Studies often imply that confounding is completely remedied by current methods, despite ample evidence to the contrary (58-60, 62, 64, 66-75). Sometimes, methods to estimate genetic parameters grow in popularity even after they are shown to be susceptible to confounding, with this susceptibility rarely mentioned as a caveat (see, e.g., discussions in 58, 65, 66, 70, 76, 77). In other cases, confounding is acknowledged as a potential limitation, but its impact on the reported results (and their interpretation) is downplayed or obscured (see, for instance, (78, 79)). As one example, consider the reporting of evidence for genetic effects from standard GWASs versus family-based studies. Family studies identify genotype-trait associations within, instead of among, families. This approach greatly mitigates many sources of confounding (60, 62, 80). Reporting practices tend to downplay this point by instead emphasizing that there exists a true genetic effect based on evidence from family studies, while continuing to rely on the magnitude of those effects as estimated in a standard GWAS (55). Such reporting choices mislead by presenting signals susceptible to confounding as measures of genetic causality.

Confounding in complex traits: death by a thousand cuts. Quantitative geneticists acknowledge residual confounding as an unsolved problem. But in practice, researchers face incentives to publish inferences of genetic associations that are vulnerable to confounding. In the case of polygenic (or "complex") traits, the usual focus of these studies, genetic contributions to trait variation are largely due to numerous genetic variants with individually small effects. Researchers often wish to leverage weaker and weaker genetic associations to capture these highly polygenic signals. At the same time, confounding tends to be aggravated as more weakly associated variants are considered (60, 74). Thus, in the pursuit of understanding polygenic effects, researchers may face a tradeoff between explaining a smaller part of the phenomenon under study in a causally rigorous way, versus accounting for a seemingly larger part at the price of unknown biases introduced by confounding. An example of this tradeoff lies in genetic trait prediction with so-called "polygenic scores" (85).
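A polygenic score is, in essence, a weighted sum of a person's allele counts using per-variant effect estimates from a GWAS. The sketch below (our illustration; every number is arbitrary) shows how small systematic per-variant biases accumulate in that sum, echoing the height-GWAS episode described above:

```python
# Tiny, systematic per-variant biases are individually negligible but sum to
# a large component of the polygenic score.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_variants = 1_000, 5_000
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))

true_effects = rng.normal(0, 0.01, n_variants)
bias = 0.002                                 # tiny systematic bias per variant
estimated_effects = true_effects + bias      # stand-in for confounded GWAS estimates

pgs_true = genotypes @ true_effects
pgs_biased = genotypes @ estimated_effects
print("spread of true-signal scores:", pgs_true.std().round(3))
print("shift added by summed bias:  ", (pgs_biased - pgs_true).mean().round(3))
```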
Polygenic scores based on more variants, including weakly associated ones, may be preferred by researchers because they often attain higher prediction accuracy than polygenic scores limited to confident associations. However, polygenic scores that include many weakly associated variants are plausibly more susceptible to underappreciated axes of confounding (60, 74, 86). Subsequent "consumers," including clinicians, researchers, policymakers, and the general public, may then assume that these polygenic scores capture strictly direct genetic effects; the possibility of confounding is rarely acknowledged. Consider, for instance, a hypothetical preimplantation genetic test using a polygenic score based on the asthma GWAS we described above. In this extreme, embryos would mistakenly be prioritized for implantation according to whether or not they share genetic variants with people exposed to higher levels of air pollution in a previous generation. Similarly, popular methods to estimate genetic correlations (the correlation between two groups of individuals in genetic effects on a trait, or the correlation in genetic effects on two traits) often indiscriminately aggregate across genome-wide associations (87, 88). Such methods are useful for characterizing how the genetic bases of complex traits are intertwined. However, they may inadvertently mask (or even introduce) additional axes of confounding (e.g., confounding that is shared between groups or traits) (60, 70, 76, 89, 90), and their uses in causal inference remain controversial (70, 91). Yet when a study reports conclusions based on genetic correlations, it is likely to be interpreted, particularly by non-experts, as unambiguously reflecting genetic causality.

Confounding and further pitfalls in causal inference

Such unknown axes of confounding are plausibly a concern in a recent study that, based on an analysis of genetic correlations, purported to resolve an evolutionary paradox: why alleles associated with same-sex sexual behavior are maintained, despite being "reproductively disadvantageous" (30). In what follows, however, we focus on other forms of confounding and errors in causal inference in (30): confusing model assumptions with evidence, ignoring the compatibility of the data with confounded explanations, and the introduction of confounding through researchers' analysis choices. We posit that, while these problems are not unique to genomic studies, they can evade attention when couched in reports about how behaviors and outcomes are genetically correlated.

Confusion of assumptions with evidence. (30) defined a measure of bisexual behavior based on questionnaire data about the total lifetime number of sexual partners and same-sex sexual partners (hereafter, we refer to this measure as BSB; see (92) and (93) for discussions of the shortcomings of such measures). (30) reports a significant positive genetic correlation between BSB in males and the number of children. But when adjusting this genetic correlation for the genetic correlations of each measure with self-assessment as a "risk-taker," the adjusted (or "partial") genetic correlation between BSB in males and the number of children was statistically indistinguishable from zero (Fig. 3a).
(30) interprets this finding as evidence that "the current genetic maintenance of male BSB is a by-product of selection for male risk-taking behavior." (30) does not explain the hypothesized mechanism by which risk-taking behavior increases the number of offspring, but in subsequent news coverage one of the authors is quoted as stating that "self-reported risk-taking [likely] includes unprotected sex and promiscuity, which could result in more children" (94). The study presents the contrast between these unadjusted and partial genetic correlations as support for a causal claim. However, the causal model is assumed a priori, and no evidence supporting this model is provided. Even under the assumption that the three measures considered are the only ones at play, and that some causally affect others, the evidence is equally consistent with contradictory causal hypotheses (e.g., different directions of causality; arrows in Fig. 2b of (30) and Figs. 3a, S14 here; Supplementary Note 8; cf. (95)). Furthermore, the study did not evaluate the support for any alternative model involving other factors, observed or latent. The authors justify their focus on risk-taking as a mechanistic explanation for the genetic maintenance of same-sex sexual behavior by citing previous reports of genetic correlations between same-sex sexual behavior and risk-taking (96, 97). However, these studies (and others (98)) reported multiple measures with similar (or even stronger) genetic correlations with same-sex sexual behavior than risk-taking (96) (Supplementary Table 5). Sexual behavior aside, (30) neither cites nor offers any evidence for the alleles associated with risk-taking being maintained over long evolutionary timescales. Additionally, this association is based on the answer to a single questionnaire question, "Would you describe yourself as someone who takes risks?" (99). It is possible that responses to this question reflect a tendency towards practicing unprotected sex and promiscuity, and that they simultaneously correlate with risk-taking tendencies that have been relevant for fitness throughout recent human evolution and across evolving societies; but, as acknowledged in (30), these key assumptions are hard to evaluate.
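For readers unfamiliar with the adjustment at issue, the "partial genetic correlation" contrast discussed above follows the arithmetic of a standard first-order partial correlation. The sketch below uses that textbook formula with illustrative numbers of our choosing (the paper's own estimates come from Genomic SEM, not this formula):

```python
# Partial correlation of x and y after removing the component shared with z.
from math import sqrt

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """First-order partial correlation of x and y given z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If both focal measures correlate substantially with z, a modest raw
# correlation between them can shrink to ~0 once z is partialled out:
print(partial_corr(r_xy=0.20, r_xz=0.50, r_yz=0.45))  # about -0.032
```

The catch, as the reanalysis below shows, is that many different choices of z can produce this shrinkage, so the shrinkage alone cannot single out one mediator.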
Hence, other causal narratives that do not involve risk-taking could just as easily be constructed: the data are equally consistent with the hypothesis that genetic variants driving BSB are maintained through evolution as a byproduct of selection on the number of falls in the last year, weekly cell phone usage, or any of these measures (Fig. 3b; Fig. S15).

Confounding introduced by researchers' analysis choices. Measures relating to having experienced some form of nonconsensual sex ("victim of sexual assault" and "first had sex before age 13") exhibited some of the strongest genetic correlations with BSB in males (Fig. S15). This observation led us to be concerned about the ascertainment choices made in (30). Indeed, we found that classification as a BSB individual is highly enriched among males who reported having first had sex before age 13 (in this regard, we note that children under this age are not legally capable of consenting to any sexual activity in the UK (101)) (Fig. 3c). Whereas 2.2% of males in the sample considered were classified as "BSB individuals," this classification rate increased to 9.8% among those who reported first having sex between ages 10 and 12 (inclusive), and to 25% among those who reported first having had sex before age 10. Though we do not know, for this dataset, the age at which males classified as BSB first had same-sex sexual intercourse, or what fraction of victims of sexual assault had a perpetrator of the same sex, the majority of reported sexual assaults on prepubescent male victims are carried out by male perpetrators (102-105). This aggravates the concern that the BSB classification used in (30) conflated voluntary sexual behavior and sexual assault, undermining the study's stated aim of advancing our understanding of human sexual preferences. Taken together, our reanalysis of (30) cautions yet again against causal inference based on preferential attention towards sensational hypotheses and analyses that seemingly support them.

Conclusion. The study of the genetics underlying human behavior and social outcomes, with its fraught history and heightened potential for misinterpretation and misappropriation (55,56,65,92,106,107), demands the utmost rigor. The failure to reckon with confounding fuels misinterpretation of genetics research and impedes scientific progress. We are therefore concerned that a publishing culture that rewards sensationalism may instead promote a decline in standards (108,109). In that respect, everyone has a role to play: it is crucial that researchers, reviewers, and editors uphold the highest standards in their handling of these complex, far-reaching issues.
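To make the population-structure confounding illustrated in Figure 1b (below) concrete, here is a minimal toy simulation in Python; the two-subpopulation setup, allele frequencies, and environmental shift are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pop = rng.integers(0, 2, n)           # two subpopulations
p = np.where(pop == 0, 0.2, 0.6)      # allele frequency differs between them
g = rng.binomial(2, p)                # genotype at a variant with no causal effect
y = 2.0 * pop + rng.normal(size=n)    # phenotype shifted by environment only

# Naive GWAS-style regression: the slope is biased away from the true effect
# (zero), because subpopulation membership confounds genotype and phenotype.
print("naive slope:    %.3f (true effect = 0)" % np.polyfit(g, y, 1)[0])

# Adjusting for the subpopulation label (within-group demeaning) removes
# this particular axis of confounding.
g_adj = g - np.where(pop == 0, g[pop == 0].mean(), g[pop == 1].mean())
y_adj = y - np.where(pop == 0, y[pop == 0].mean(), y[pop == 1].mean())
print("adjusted slope: %.3f" % np.polyfit(g_adj, y_adj, 1)[0])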
Figure 1. Confounding between genetic and non-genetic factors influencing traits. (a) Confounding within families. Non-genetic transmission can parallel genetic transmission, and their respective effects are confounded in observational data. Illustrated is a model where a trait value is the sum of an inherited component from parents and random noise. Under this model, the expected resemblance between relatives depends on transmissibility (t^2, the portion of trait variation attributable to the transmitted component) and a rate of decay across genealogical distance (the "persistence rate," b, which increases with increasing degree of assortative mating). Ignoring the confounding of genetic and non-genetic transmission in the data, Clark (2023) misassigns all transmission as genetic heritability and all assortative mating to be on a latent "social genotype". (b) Confounding among families induces biases in GWAS. "Population structure confounding" in genomic data relates to correlations between the structure of genetic relatedness in a GWAS sample (exemplified by the orange-to-purple gradient) and the phenotype studied. Here we show genetic sequences from individuals 1-4 at top left, with their attendant phenotypes (height) at top right. For a given genetic variant, individuals with purple alleles will tend to be taller than those with orange alleles.

Figure 2. Reanalyses of data from Clark (2023) challenge the paper's claims. (a) Example of confounding between genetic and non-genetic transmission. Relationships between social status measures and paternal wealth suggest at least one potential source of confounding between genetic and non-genetic transmission. Across relative pairs, correlation in occupational status is highly correlated (Pearson's r = 0.91) with those relatives' correlation in paternal wealth. Inset shows that individual occupational status is strongly correlated (Pearson's r = 0.72) with father's wealth. Plots show data for 13,030 individuals born 1780-1859 and their fathers. ((29) estimated wealth from probate records. The log of estimated wealth was mean-centered with respect to 5-year bin means. Individuals not probated due to insufficient wealth were assigned a value of half the minimum probate requirement for the time period.) (b) Pseudoreplication distorted estimates of familial correlations. Familial correlations (95% CI) in occupational status (1780-1859) using the approach employed by (29) (in gold) involved pervasive, non-uniform pseudoreplication (Supplementary Note 6). For example, the (1780-1859) occupational status correlation for fourth cousins is calculated from 17,382 pairs, derived from only 1,878 unique individuals. In teal we show conservative estimates using only a single relative pair per surname [means and 95% CI over 1000 bootstrap samples are plotted for each familial correlation], which are therefore not susceptible to pseudoreplication. Distant cousins show dramatically higher correlations after adjusting for pseudoreplication. (c) Signals of change in social mobility. Parent-offspring correlations in multiple status measures generally decrease over time in (29)'s data, in contrast to claims of stagnant social mobility made in the original paper. To mitigate pseudoreplication, we calculated correlations using one pair from each surname [as in (b)]. Shown are average correlations (95% CI) across 500 bootstrap iterations of correlation estimation. Fig. S13 shows two complementary analyses estimating correlations either without accounting for pseudoreplication, or using percentile ranks; both result in similar trends.
Figure 3. (a) Song and Zhang (2024) show that the estimated genetic correlation between BSB (a measure of bisexual behavior) in males and number of children is significantly different from zero (left diagram). They hypothesized the causal structure shown in the right diagram: genetic variants affecting BSB affect the number of children only through their simultaneous effect on risk-taking behavior. When adjusting for genetic effects on risk-taking behavior, the residualized (or "partial") genetic correlation between BSB and number of children is no longer significantly nonzero. They take this observation as evidence for their hypothesis. (b) However, when we repeat this analysis but replace risk-taking with a variety of other measures (blue in the causal diagram), 16/18 measures yield a partial genetic correlation between BSB in males and number of children that is consistent with zero (measures we considered are shown in blue in the plot on the right; error bars indicate 95% confidence intervals). Asterisks indicate the four measures without a significant partial genetic correlation with number of children (Fig. S16). (c) Male participants in the study sample who reported having first had sex before age 13 (including victims of childhood sexual assault, many of whom would have had same-sex perpetrators) are likelier to be classified as BSB by the criteria used in Song and Zhang (2024). N, total number of males in each group.
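At the level of point estimates, the partial genetic correlations reported in Figure 3 reduce to the standard partial-correlation formula applied to the three pairwise genetic correlations (Genomic SEM additionally propagates sampling error through the genetic covariance matrix, which this sketch omits). A minimal illustration with made-up values:

import numpy as np

def partial_correlation(r_xy, r_xz, r_yz):
    # Correlation of x and y after adjusting for z, from the pairwise correlations.
    return (r_xy - r_xz * r_yz) / np.sqrt((1.0 - r_xz**2) * (1.0 - r_yz**2))

# Hypothetical values: r_xy, genetic correlation of BSB with number of children;
# r_xz and r_yz, genetic correlations of each with the candidate mediator "Measure X".
print(round(partial_correlation(0.20, 0.50, 0.45), 3))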
Psychopathological Differences Between Self-Injurious Behaviors and Suicide Attempts in Adolescents

Objective: Suicide attempts and self-injurious behavior are major public health concerns, and they are strong predictors of death in youths worldwide. Given the risk of death, there is an urgent need to understand the differences between them and to identify effective interventions. This study aimed to investigate the relationship between the predictors associated with non-suicidal self-injury and suicide attempts among adolescents.

Materials and Methods: The study recruited a total of 61 adolescents aged 12-18 years, with suicide attempts (n = 32) and non-suicidal self-injury (n = 29). The Turgay Disruptive Behavioral Disorders Screening and Rating Scale (Parent Form), the Rosenberg Self-Esteem Scale, and the Beck Anxiety and Beck Depression Inventories were administered. All participants were interviewed with the structured clinical interview for the Diagnostic and Statistical Manual of Mental Disorders, fourth edition.

Results: The adolescents with suicide attempts were found to have lower self-esteem and higher depression, inattention, and hyperactivity-impulsivity scores than the group with non-suicidal self-injury. Higher inattention scores and rural residency were positively and significantly associated with suicide attempts, adjusting for the other covariates (odds ratio = 1.250, 95% CI = 1.024-1.526; odds ratio = 4.656, 95% CI = 1.157-18.735).

Conclusion: This study shows that some clinical psychiatric factors may be helpful in distinguishing adolescents with suicide attempts from adolescents with non-suicidal self-injury. Future research is needed to determine the predictive role of these variables in distinguishing suicide attempts from self-injurious behavior.

Introduction

Self-injurious behavior, whether suicidal or not, is a serious public health problem affecting adolescents and young adults globally. 1 Mechanisms associated with self-regulation, such as coping (cognitive and behavioral response processes) and emotion regulation (emotional response processes), are thought to underlie this behavior among adolescents. 2 Frequent or varied types of injury are associated with more suicidal behavior than infrequent and less varied types. 3 Injury behaviors can be divided into intentional and unintentional injuries. Intentional injuries fall into 3 groups: suicide, non-suicidal self-injury (NSSI), and violent attacks. 4 Although research supports the distinction between suicide attempts (SAs) and NSSI, overlap between the 2 phenomena has been identified in up to 70% of clinical populations. 5 Both phenomena have been found to be associated with high levels of depression, suicidal ideation, and hopelessness. In addition, those who attempted suicide had higher scores on measures of anxiety, depression, and suicidal ideation than those with NSSI. 6 A suicide attempt also has more serious consequences than NSSI, and the risk of suicide for adolescents with NSSI is considerably elevated. Being able to identify the differences between the 2 phenomena might help clinicians recognize adolescents likely to attempt suicide. Suicide, one of the leading causes of death worldwide, has even more worrying consequences for young people: it is estimated to be the second leading cause of death among young people aged 10-24. 7 Suicide risk has been associated with sociodemographic variables such as gender, age, marital status, economic status, and educational status. 8
Other clinical features associated with SAs are as follows: tobacco and alcohol use; exposure to traumatic stressful events, such as abuse; physical illness, somatic symptoms, and anxiety; and psychological factors such as hopelessness, impulsivity, low self-esteem, loneliness, anger, and appetite loss. 8-12 Adolescence is a risky period in terms of self-injurious behavior due to difficulties in coping with stress and in regulating emotions. 13 Some of the stressors and sociocultural factors mentioned above for both NSSI and SAs may increase self-regulation difficulties in adolescents and further raise the risk of self-injury. 13 Also, considering the relationship between the above-mentioned risky predictors, it can be thought that some disorders, such as depression, anxiety disorders, and attention-deficit/hyperactivity disorder (ADHD), may increase these risky behaviors in youth. Indeed, researchers have studied psychiatric disorders alongside self-injurious behavior in youths and have reported very high prevalence figures. In a review that included data from 24 countries, psychiatric disorders were identified in 81.2% of adolescents with self-injurious behavior. 14 The most common disorders according to these data were depression, anxiety, and alcohol abuse, respectively, as well as ADHD and conduct disorder among youths.

In summary, adolescents with a psychiatric disorder show an increased risk for SAs and NSSI. Besides, there is strong evidence that some sociodemographic variables also contribute to these behaviors. Identifying variables that mediate between SAs and NSSI may further our understanding of the underlying mechanisms and guide clinical practice. Here, we aimed to investigate the relationship between SAs and NSSI by evaluating sociodemographic and diagnostic predictors and examining the effects of ADHD symptomatology, anxiety and depression scores, and self-esteem levels.

Participants

The participants consisted of adolescents who were admitted to the emergency department with the complaint of NSSI or SA between 2016 and 2018 and were then evaluated face-to-face in the child and adolescent psychiatry outpatient clinic of a training and research hospital. During this period, 76 cases with SAs or NSSI were consulted by the department of child and adolescent psychiatry, but a total of 61 adolescents (NSSI = 29, SAs = 32) were included in the study. Fifteen cases were excluded: 2 had autism, 1 had a brain tumor, 10 refused the interview, and 2 died after suicide.

Procedures

Adolescents who were admitted to the emergency department with complaints of suicide or NSSI were invited to undergo further examination (a structured clinical interview and adolescent/parent scales). Written informed consent was obtained from the parents of the adolescents participating in the study. Inclusion started in 2016, and we excluded those with prior NSSI or SAs. The Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version for the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (K-SADS-PL DSM-IV), was used to obtain information on mental health disorders. Patients diagnosed with psychosis and substance addiction were also included in the study. However, those with organic problems were not included in the study; they were triaged into a separate unit. We excluded self-injurious behaviors related to a diagnosis of autism or intellectual disability, as well as accidental self-injury.
In addition, adolescents and their parents who did not know Turkish were not included in the study. Furthermore, only the first-admission data were used for cases admitted to the emergency department more than once during the study period. All study procedures were approved by the Institutional Review Board. Ethical approval was obtained from the Ethics Committee of Atatürk University, School of Medicine (2018/19-199).

Measurements

The Kiddie Schedule for Affective Disorders and Schizophrenia for School-Age Children, Present and Lifetime Version: This is a semi-structured interview form that evaluates the current and past psychopathology of children and adolescents according to DSM-IV. 15

Beck Depression Inventory (BDI): This is a 21-item self-report scale that assesses the current severity of depression, with a total score ranging from 0 to 63. 16

Beck Anxiety Inventory (BAI): This is a 21-item self-report scale that assesses the current severity of anxiety, with a total score ranging from 0 to 63. 17

The Rosenberg Self-Esteem Scale (RSES): This 10-question scale captures an individual's general assessment of self-worth. It includes 5 positively and 5 negatively worded items and is scored using 4 response options ranging from strongly agree to strongly disagree. 18 The level of self-esteem in this test can be summarized as follows: 0 to 1 "high," 2 to 4 "medium," and 5 to 6 "low."

Turgay DSM-IV-Based Disruptive Behavior Disorders Child and Adolescent Rating and Screening Scale, Parent Form: This scale is widely used to determine ADHD subtypes, severity, and disruptive behavioral problems based on DSM-IV diagnostic criteria. This parent-reported scale was adapted to Turkish by Ercan. 19 It is a four-point Likert-type scale comprising inattention (9 items), hyperactivity-impulsivity (9 items), opposition/defiance (8 items), and conduct disorder (15 items). Symptoms are scored on a 0-3-point Likert-type scale by assigning an estimate of severity for each symptom. Higher scores indicate more severe problems. In this study, the total score of each subscale was used.

Data Analysis

Descriptive statistics were reported for the basic sociodemographic variables. Relations between variables were evaluated using the chi-square test and the independent-samples t-test, as appropriate. The Shapiro-Wilk test was used to assess normality of distributions. Pearson's and Spearman's correlation analyses were used, as appropriate. Logistic regression analysis (forward selection) was performed to determine independent risk factors for SAs, including the variables that were statistically significant in the univariate analyses. Statistical analyses were performed with Statistical Package for the Social Sciences software version 20 (IBM SPSS Corp.; Armonk, NY, USA). Statistical significance was defined as P < .05.

Main Points

• Self-injurious behaviors and suicide attempts often co-occur in adolescents.
• The difference between self-injurious behaviors and suicide attempts has yet to be elucidated.
• Adolescents with suicide attempts may have lower self-esteem, higher depression scores, and higher inattention and hyperactivity/impulsivity (H/I) subscale scores than adolescents with self-injurious behaviors.
• Environmental (rural residency) and cognitive (higher inattention scores) factors may pose a risk for suicide attempts in adolescents.

Results

In total, 61 adolescents aged 12-17 years were included in the study. Fifty-one (83.6%) were females and 10 (16.4%) were males.
The median age was 15 years, and the interquartile range (IQR) was 15-16 years. Thirty-two adolescents (52.5%) had SAs. There was no significant gender difference between the SA and NSSI groups, but adolescents were predominantly female in both (84.4% vs. 82.8%, χ² = 0.29, P = .865). Table 1 presents the sociodemographic features and the comparison between adolescents with SAs and NSSI. According to the cases' answers to the question of why they exhibited this behavior, the reported reasons in the SA group were, respectively, unhappiness (n = 10), trauma history (n = 9), anger management problems (n = 8), hopelessness (n = 4), and psychotic delusions (n = 1). In the NSSI group, the most common reasons were anger management problems (n = 13), trauma history (n = 7), hopelessness (n = 5), unhappiness (n = 3), and psychotic delusions (n = 1), respectively. Also, related to trauma history, 5 of the SA group and 2 of the NSSI group reported having been sexually abused; however, there was no significant difference between the groups (P = .269). The rates of smoking, alcohol abuse, and substance abuse were higher in adolescents with SAs, but there was no significant difference between the groups (P = .609, P = .307, and P = .674, respectively). In both groups, the rate of having a parent with aggressive behavior was high, but there was no significant difference between the 2 groups (SA: 62.5%; NSSI: 72.4%). The rates of pregnancy stress history were 58.6% (n = 17) in the NSSI group and 78.1% (n = 25) in the SA group. There was also a significant difference between the 2 groups in terms of residency (P = .017). The rates of intrafamilial conflict were 59.5% in the SA group and 40.5% in the NSSI group; however, this difference was not significant. There was no significant difference between the 2 groups in terms of age, ethnicity, socioeconomic status, education level, school success, pregnancy stress, physical illness history, or family history of mental illness (P > .05).

Both groups were compared in terms of BDI, BAI, and RSES scores. In adolescents with SAs, depression scores and inattention and hyperactivity/impulsivity (H/I) subscale scores were higher, and self-esteem was lower, than in the group with NSSI (Table 2). Table 3 presents the correlation analyses between scale scores and selected variables. There was a positive correlation between years since onset of self-injury (the total duration in years since self-injurious behavior began) and age (r_p = 0.271, P = .035). There was also a positive correlation between the inattention and hyperactivity/impulsivity subscale scores (r_p = 0.470, P = .01). Besides, there were positive correlations between RSES and BAI scores (r_p = 0.374, P = .01) and between RSES and BDI scores (r_p = 0.644, P = .01).

Discussion

Adolescents with SAs showed higher depression scores, higher inattention and hyperactivity/impulsivity symptom scores, and lower self-esteem compared to the NSSI group; when all scale scores were combined with other predictors, inattention symptom scores remained a consistent contributor to the difference between the groups. These results support the findings of previous research showing the effect of ADHD symptomatology on suicidal behavior. 21 Studies have found a relationship between ADHD and risk factors for suicide and self-injurious behaviors. 22 The severity of ADHD symptomatology has also been shown to have a significant relationship with these behaviors. 23
In this study, inattention and hyperactivity/impulsivity scores were found to be associated with suicide attempts. As is known, a suicide attempt poses a greater risk of death than self-injurious behaviors. Given the association of ADHD symptomatology with a range of other high-risk behaviors, it is not surprising that the SA group had higher ADHD scores than the NSSI group. However, hyperactivity/impulsivity scores did not remain an independent predictor of SAs in the regression model. The fact that hyperactivity/impulsivity symptom scores remained significant in the pairwise comparison but not in the regression model may suggest a mediating role rather than a direct effect on SAs. This result may also be due to the small sample size. The groups were similar in terms of age, gender, education, and economic status. Interestingly, the adolescents were predominantly female in both groups. Considering that the participants were recruited from emergency room admissions, it can be thought that females in both groups needed more emergency service support than males. Also, the rate of youth living in rural areas was higher in the SA group than in the NSSI group. Studies have shown that adolescents living in rural areas are almost twice as likely to die by suicide as those living in urban areas. 24 Factors contributing to adolescent suicides in rural areas include mental health problems, labor shortages, poverty, and increased access to lethal means. 24 Considering the conditions of the region where our study was conducted, a possible explanation for this relationship is the limited availability of mental health services in rural areas. The higher symptom scores of disorders, together with probably poorer access to care, may explain the difference in the group with SAs.

In line with the literature, depressive and anxiety symptoms were quite high in both groups. 20 In addition, depressive symptom scores were higher in the SA group than in the NSSI group, even though the depression levels of both groups were severe according to the BDI. A higher depression score posed a risk for SAs; nevertheless, it did not remain an independent predictor of SAs in our logistic regression model. This may suggest a role for combined, multifactorial risks rather than a single-factor effect for SAs in adolescents. Adolescents in both groups reported high rates of unhappiness, hopelessness, trauma history, anger management problems, self-cutting, and family conflict; also, 5 of the SA group and 2 of the NSSI group had a history of sexual abuse. It is known that such risky psychosocial and psychiatric factors give rise to behaviors such as NSSI and/or SAs in adolescents. 25,26 This may suggest not only that psychiatric disorders pose a significant risk for SAs and NSSI, but also that this risk often materializes after adverse life events. Therefore, it is not surprising that NSSI might accompany SAs. 27 Further, due to the multiple risk factors underlying self-harming behaviors, it is not easy to differentiate NSSI from SAs quantitatively. 25 In this study, 65.6% of the SA group were found to also exhibit NSSI behaviors, so high comorbidity may likewise blur the differences between SAs and NSSI. In sum, suicide is the second leading cause of death in adolescence. Understanding which adolescents will attempt suicide has important implications for suicide prevention and early intervention. Therefore, identifying contemporaneous risk predictors for SAs is critical.
Although there was a common etiology between the SA and NSSI groups, we found some significant differences between the 2 groups. Our findings underscore the need for greater awareness of ADHD and depression symptomatology, self-esteem levels, and rural residency in adolescents with suicide attempts. One limitation of our study is its cross-sectional nature, which precludes clear inferences about the directionality of the relationships between sociodemographic and clinical variables and SAs in self-injurious adolescents. Another limitation is the small sample size, which makes the findings less generalizable to other groups of adolescents with suicide attempts. The results need to be confirmed by longitudinal studies.

Self-injurious behavior and SAs, which are strong predictors of death, are common in adolescents worldwide. Our study shows that clinical measures (higher depression scores and lower self-esteem), environmental factors (rural residency), and cognitive measures (higher inattention scores) differentiate adolescents with and without a history of SAs. Further studies are needed to confirm whether the findings identified in this study differentiate those with SAs from those with NSSI and predict subsequent SAs in a larger sample of adolescents. Our study also provides clues for clinicians about individual, social, and environmental interventions that may contribute to the prevention of suicide in adolescents, and it highlights the need for close monitoring of adolescents with SAs and NSSI.

Ethics Committee Approval: Ethical committee approval was received from the Ethics Committee of Atatürk University, School of Medicine (2018/19-199).

Informed Consent: Written informed consent was obtained from the parents of the adolescents participating in the study.

Declaration of Interests: The author has no conflicts of interest to declare.

Funding: The author declared that this study has received no financial support.
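For readers wishing to reproduce an analysis of the kind described under Data Analysis, a minimal sketch of a logistic regression with odds ratios and 95% CIs follows (Python with statsmodels rather than SPSS; the data generated below are synthetic, not the study's, and the forward-selection step is omitted):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 61  # same sample size as the study, but the data here are synthetic
inattention = rng.integers(0, 28, n)   # Turgay inattention subscale (9 items, 0-3)
rural = rng.integers(0, 2, n)          # 1 = rural residency
# Synthetic outcome loosely following the reported direction of effects:
logit = 0.2 * (inattention - 14) / 5.0 + 1.5 * rural - 0.5
sa = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(pd.DataFrame({"inattention": inattention, "rural": rural}))
fit = sm.Logit(sa, X).fit(disp=0)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),        # odds ratios
                    "CI_low": np.exp(ci[0]),          # lower 95% bound
                    "CI_high": np.exp(ci[1])}))       # upper 95% bound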
Asymptotic Spectroscopy of Rotating Black Holes

We calculate analytically the transmission and reflection amplitudes for waves incident on a rotating black hole in d = 4, analytically continued to asymptotically large, nearly imaginary frequency. These amplitudes determine the asymptotic resonant frequencies of the black hole, including quasinormal modes, total-transmission modes and total-reflection modes. We identify these modes with semiclassical bound states of a one-dimensional Schrödinger equation, localized along contours in the complexified r-plane which connect turning points of corresponding null geodesics. Each family of modes has a characteristic temperature and chemical potential. The relations between them provide hints about the microscopic description of the black hole in this asymptotic regime.

I. INTRODUCTION

The experimental inaccessibility of the Planck scale motivates searches for indirect windows on the theory of quantum gravity. Quantization of black holes could play an important role in this regard, analogous to that of atomic models in the development of quantum mechanics. In the search for a quantum theory of gravity, the formation and evaporation of black holes as measured by an observer very far from the horizon are generally assumed to be consistent with the basic principles of general relativity and quantum mechanics. Related classical processes, such as waves scattering off the black hole, thus play an important role in constraining quantum gravity.

The problem of determining the transmission (T) and reflection (R) amplitudes of linearized perturbations incident from spatial infinity is central in the study of black holes [1]. Information about the classical black hole encoded in T and R has been associated in some cases with its quantum counterpart [2]. However, in spite of an intensive study of black hole spectroscopy, analytic results for T and R in the general case of a rotating black hole have so far been available only in the low-frequency limit.

Isolated, classical black holes, like most systems with radiative boundary conditions, are characterized by a discrete set of complex ringing frequencies ω(n) = ω_R + iω_I known as quasinormal modes (QNMs) [3]. These resonances play an important role in modeling the time evolution of black hole perturbations; simulations show that at intermediate times they make the dominant contribution. The discrete QNM spectrum, given by the poles of T and R, extends (for fixed quantum numbers) along the imaginary ω-axis to infinitely large |ω_I|, so one might suspect that the amplitudes T and R have an interesting structure at large, nearly imaginary frequencies. Numerical studies have revealed a complicated, rich spectrum at low frequencies even for a spherical black hole. Highly-damped resonances with |ω_R| ≪ |ω_I| are known to be less sensitive to the details of the perturbation, suggesting that T and R may admit a simple interpretation in this regime. For example, it has been argued that one can read off the quantum of area of the black hole horizon from the highly-damped QNM frequencies [4].

The transmission-reflection problem has previously been solved analytically in the highly-damped regime for spherical black holes [5,6]. Recently, the highly-damped QNM spectrum of rotating black holes was analytically derived [7].
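For orientation, in the spherical case solved in [5,6] the highly-damped spectrum takes a closed form: the Schwarzschild QNMs satisfy e^{ω/T_H} = −(1 + 2 cos πs), giving the well-known ln 3 offset for s = 0, 2. A minimal numeric sketch (Planck units; the sign of Im ω is fixed by hand to the damped convention ω_I < 0 used below):

import numpy as np

M = 1.0
T_H = 1.0 / (8.0 * np.pi * M)   # Schwarzschild Hawking temperature

s = 2   # gravitational perturbations (s = 0 gives the same offset)
offset = T_H * np.log(1.0 + 2.0 * np.cos(np.pi * s))   # = T_H * ln 3
for n in range(3):
    # the (n + 1/2) spacing absorbs the minus sign in exp(omega/T_H) = -(1 + 2 cos(pi s))
    omega = offset - 2j * np.pi * T_H * (n + 0.5)
    print(f"n={n}: omega = {omega:.4f}")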
Here we combine the tools developed in [5] and in [7] to solve the highly-damped transmission-reflection problem for a rotating black hole in four dimensions. The resulting analytic expressions for T and R capture, in addition to the QNM frequencies, various other resonances of the system. We show that these resonances can be identified directly with semiclassical bound states of an effective one-dimensional wave equation. They live naturally along steepest-descent (anti-Stokes) contours between two complex turning points of corresponding null geodesics, and their frequencies satisfy a complex Bohr-Sommerfeld equation. The highly-damped quasinormal modes (QNMs), total-transmission modes (TTMs), and total-reflection modes (TRMs) correspond to three different contours, which we interpret as "external," "internal," and "mixed," respectively. The resonant frequencies are ω(n) = ω̃ + 4πiT(n + µ/4), where ∆t = (4iT)^{−1} and ω̃∆t are respectively the time and angular distance elapsed along corresponding null geodesics in the complexified black hole background, and µ is a Maslov index.

Following the philosophy of [4] one might hope that all of these highly-damped resonances carry some information about the quantum theory. One way this could happen was proposed in [5]: determining T allows one to calculate the analytically continued spectrum of Hawking radiation escaping from the black hole, and one can look at the result for clues about a microscopic or "dual" description of the same physics, a strategy which has been successful in other spacetimes and frequency regimes in the past [2,8,9]. Indeed, as we will see, our results for a rotating black hole bear an encouraging resemblance to some examples where a dual description has been established. There are simple relations between the parameters T and ω̃ of the three resonant modes, given in Eqs. (45)-(46) below; there, T_H is the Hawking temperature of the black hole, Ω the angular velocity of the event horizon, m the azimuthal quantum number of the perturbation, and s the spin of the perturbing field. The analytically continued decay spectrum has a Boltzmann-like form, inversely proportional to e^{(ω−ω̃_QNM)/2T_QNM} + 1. These results support the point of view that the QNMs and TTMs correspond to distinct microscopic degrees of freedom, which interact to produce Hawking radiation.

The paper is organized as follows. In §II we formulate the transmission-reflection problem for a rotating black hole, derive the amplitudes T and R, and determine some of the resonances. In §III we identify the highly-damped regime as a "classical" limit in which the scattering problem reduces to tunneling between neighboring contours in the complex r-plane, and study excitations corresponding to each contour. §IV reinterprets the results of §II and §III in terms of null geodesics in the complexified black hole spacetime. In §V we study the analytically-continued decay spectrum of the black hole in search for hints of an underlying microscopic theory, and discuss analogies with cases previously studied. In §VI we summarize the analysis and discuss its conclusions. Some generalizations to other black holes are presented in Appendix §A. We use Planck units in which G = c = ℏ = k_B = k_C = 1, where k_B is the Boltzmann constant and k_C = (4πε_0)^{−1} is the Coulomb force constant.

II. TRANSMISSION-REFLECTION PROBLEM

In this section we analytically solve the problem of transmission and reflection for a rotating black hole in the highly damped regime. The general structure of the problem is formulated in §II A.
After this we specialize to the case of the rotating black hole. Some physical and mathematical background is laid out in §II B-§II D; in particular, highly-damped perturbations are shown in §II C to be equatorially confined. The boundary conditions are described in detail in §II E. The results are finally derived in §II F, summarized in §II G, and interpreted in terms of Boltzmann factors in §II H, where some resonances are also discussed.

A. Transmission and reflection

Linearized perturbations propagating in black hole spacetimes often satisfy radial equations of the form

[∂_z^2 + ω^2 − V_z(z)] f(z) = 0, (3)

where z = z(r) is a "tortoise" coordinate, defined such that z → +r as r → ∞ while z → −∞ logarithmically as r → r_+ (4), with r_+ the (outer) event horizon radius. We require that Im(z)/Re(z) → 0 as r → r_+ or r → ∞. We impose the purely outgoing boundary condition at the horizon (with respect to the physical line r > r_+, i.e. signals travel only into the black hole); here T and R are respectively the transmission and reflection amplitudes for a wave incident from infinity [Eq. (5)]. The precise definition of these boundary conditions is delicate, especially for complex ω, and will be discussed in Section II E. Constancy of the Wronskian of the two independent solutions of Eq. (3) implies a "conservation of flux" relation, valid for arbitrary complex ω,

T(ω) T(−ω) + R(ω) R(−ω) = 1, (6)

where T(−ω) and R(−ω) are the transmission and reflection amplitudes that correspond to a different problem, in which the ω-dependent terms in V_z have been modified by ω → −ω. A far field analysis for real ω shows that R(ω) R(−ω) is the fraction of energy reflected, so T(ω) T(−ω) is the absorption (transmission) probability [see Ref. 1, and §II B].

B. Teukolsky's radial equation

Consider an uncharged rotating black hole of mass M and angular momentum J. Linearized, massless perturbations of the black hole are described by Teukolsky's equation [10]. For scalar perturbations, this equation has been generalized to accommodate a non-zero black hole electric charge Q [11]; in the equations to follow, one must take Q = 0 except for scalar perturbations. The perturbation is decomposed as

ψ = e^{−iωt + imφ} S(θ) R(r), (7)

where (t, r, θ, φ) are Boyer-Lindquist coordinates, and l, m are angular, azimuthal harmonic indices with −l ≤ m ≤ l. The parameter s gives the spin of the field, specializing the analysis to gravitational (s = −2), electromagnetic (s = −1), scalar (s = 0), or two-component neutrino (s = −1/2) fields. We shall henceforth omit the indices s, l, m for brevity. With the decomposition (7), R and S obey radial and angular equations, both of confluent Heun type [12], coupled by a separation constant A. The radial equation is given in [10]; it involves ∆ ≡ (r − r_+)(r − r_−), where r_± = M ± (M^2 − a^2 − Q^2)^{1/2} are the outer (positive sign) and inner (negative sign) horizons, and a ≡ J/M.

We now focus on the highly-damped regime, roughly the limit where |ω_I| is larger than any other scale in the problem, including ω_R, M^{−1} and M/J, holding l and m fixed. In this limit we may write the separation constant A in the asymptotic form of [13,14], with A_1 ∈ R. Eq. (8) may then be rewritten using Eq. (9); the resulting coefficients q_i are related to the Kerr-Newman metric [e.g. Ref. 15] by q_0 = g_φφ Σ and Re(q_1) = 2m g_tφ Σ, where Σ ≡ r^2 + a^2 cos^2 θ vanishes at the ring singularity. Near the horizons, q_0 = (A_±/4π)^2, where A_± = 4π(r_±^2 + a^2) is the area of the outer/inner horizon. Teukolsky's radial equation may finally be written [7] in the form of Eq. (3), upon defining f ≡ ∆^{(s+1)/2} V^{1/2} R and a (nonconventional; cf. [1]) tortoise coordinate z ≡ ∫^r V(r′) dr′ (14). The potential V(r) defined in Eq. (11) is multivalued because of the square root.
We will choose its branch cuts such that our analysis uses only a single Riemann sheet for V(r), on which as r → ∞ we have V(r) → +1 and z → +r, in agreement with Eq. (4). Eq. (14) shows that z(r) is also multivalued, with monodromy around each of the two simple poles of V(r); this monodromy will play an important role below. The potential V_z appearing in Eq. (3) is given in terms of V and its derivatives with respect to r (15), and satisfies V_z = O(ω^0). It remains finite at r_±, but diverges at the four turning points r_i defined by V(r_i) = 0, which are essential to the analysis. The O(|ω|^{−2}) term in Eq. (11) should be chosen so that V_z vanishes exponentially as z → −∞ (16).

Finally, we briefly discuss the relation between the wave equation (3) and the physical absorption probability. In [1] it is argued that for electromagnetic and gravitational perturbations the fraction of energy reflected is R(ω) R(−ω). The same result is shown for scalar perturbations in e.g. [16], and for fermions in [17]. In those treatments the radial equation is formulated with a different definition of z and f than we are using; our R(ω) R(−ω) nevertheless agrees with theirs.

C. Equatorial confinement

S is required to be regular at the regular singular points θ = 0 and θ = π (the poles). This condition picks out a discrete set of solutions S = S_l, known as spin-weighted spheroidal wave functions (SWSWFs), and corresponding eigenvalues A [for a review see Ref. 14, and references therein]. In the scalar case s = 0, the S_l reduce to the more familiar spheroidal wave functions (SWFs) [18]. When |aω| → ∞ for fixed l, A = O(|aω|) is given by Eq. (9) both for s = 0 [prolate-type SWFs, see Ref. 19, and references therein] and s ≠ 0 [14]. Then the right side of Eq. (17) is dominated by the first term sufficiently far from the poles and from the equator; when m ≠ 0 the condition is |m/aω|^2 ≲ cos^2 θ ≲ 1 − |m/aω|^2. Very near the poles, the second term on the RHS takes over. Both these terms are positive in the highly-damped regime, so we get exponential decay/growth of S everywhere except in the equatorial region. The regular boundary conditions at the poles then require that S decays rapidly away from the equator. This analysis agrees with the known behavior of the asymptotic prolate SWFs, whose magnitude decreases rapidly with increasing |cos θ| [19]. There is some numerical evidence that this is also the case for the SWSWFs [e.g., Ref. 13, Figure 5].

D. Stokes and anti-Stokes lines

Eq. (3) can be solved in the highly-damped regime by evolving f in the WKB approximation [46] along specific contours in the complex r-plane. Such a contour, consisting of anti-Stokes lines defined by Re(iωz) = 0, is constructed as follows. Let r̃_1 and r̃_2 = r̃_1^* be the two complex-conjugate roots of q_0, with Re(r̃_1) > 0 and Im(r̃_1) < 0. The two other roots, r̃_0 and r̃_3, are real; for Q = 0 they are r̃_0 = 0 and r̃_3 = −2 Re(r̃_{1,2}). Let r_0, r_1 and r_2 denote the turning points which in the |ω_I| → ∞ limit approach r̃_0, r̃_1 and r̃_2, respectively (see Figure 1). Near the turning points, (z − z_i) ∝ (r − r_i)^{3/2}, where z_i ≡ z(r_i). Therefore three anti-Stokes lines emanate from each turning point. Two anti-Stokes lines connect r_1 to r_2; one (denoted l_2) crosses the real axis between r_− and r_+, while the other (l_4) crosses it at r > r_+. The third anti-Stokes line (l_1) emanating from r_1 extends to P_1, where |P_1| → ∞ and arg(P_1) = −π/2.
A similar line (l_3) runs from r_2 to P_2, with |P_2| → ∞ and arg(P_2) = +π/2. A Stokes line, defined by Im(iωz) = 0, emanates between every two anti-Stokes lines of each turning point. Figure 1 illustrates the features relevant to the analysis in the complex r-plane.

Along anti-Stokes lines, the WKB approximation holds,

f ≈ c_+ f_+(z − z′) + c_− f_−(z − z′), (18)

where we defined f_±(z) ≡ e^{±iωz}, and z′ = z(r′) is some reference point. We use the notation f(l_j) = {c_+, c_−; z′} to describe this solution to leading order in |ω|^{−1} along the anti-Stokes line l_j. Off the anti-Stokes lines, the solution may also be written as a sum of a dominant part f_d and a subdominant part f_s, where f_s is exponentially small in that region. The coefficients c_± are approximately constant along anti-Stokes lines away from the turning points, and mix with one another near the turning points in a way dictated by the Stokes phenomenon [20]. When an anti-Stokes line is crossed, the dominant and subdominant parts exchange roles; when a Stokes line is crossed while circling a regular turning point at r′, the subdominant coefficient jumps by ±i times the dominant coefficient, where the positive (negative) sign corresponds to a counterclockwise (clockwise) rotation.

E. Boundary conditions

Next, we implement the boundary conditions Eq. (5) for a rotating black hole. This is slightly subtle for complex ω. A rigorous way to fix the boundary condition at r_+ is by specifying the monodromy of the solution there, i.e. requiring that f is an eigenvector of the monodromy matrix with a specific eigenvalue. A Frobenius analysis (power series expansion) of the Teukolsky equation at r_+ shows that there are two independent solutions R(r) = (r − r_+)^{iωτ_k} [1 + O(r − r_+)], with k ∈ {I, O} corresponding to ingoing, outgoing waves with respect to the physical region outside the black hole (i.e. signals travel out of, into the black hole, see [21]; henceforth). These solutions have monodromies e^{2πωτ_k} on a clockwise rotation around r_+, where τ_k = ±(ω − mΩ)/(4πT_H ω), with the positive (negative) sign corresponding to ingoing (outgoing) waves [21]. Here, T_H = (r_+ − r_−)/A_+ is the Hawking temperature, and Ω ≡ Ω_+ = 4πa/A_+ is the angular velocity of the (outer) event horizon. The relation between f and R involves an extra factor ∆^{s/2}, which is proportional to (r − r_+)^{s/2} near the horizon and has a monodromy e^{−πis} on a clockwise rotation around it. The two solutions f_k(r) near r_+ thus have monodromies e^{±2πωσ_+}, where σ_+ = τ_I − is/(2ω) (21). To leading order in |ω|^{−1}, this expression for the monodromies could also have been obtained by writing the two solutions as e^{±iωz} and then using the monodromy of z around r_+; that would give σ_+ = Res_{r→r_+}(V), as used in [7], which indeed agrees with Eq. (21). The boundary condition requiring outgoing waves at the horizon can be defined as choosing the solution with clockwise monodromy e^{−2πωσ_+}.

Next we consider the boundary condition at spatial infinity. For ω slightly off the real axis, the boundary condition at r → ∞ can be continued to a point P on the complex r-plane, lying far from the origin on an anti-Stokes line nearest to the real axis [see Ref. 22]. As arg(ω) gradually decreases from 0 to −π/2, arg(P) gradually increases from 0 to +π/2, P eventually becoming nearly imaginary [23]. When Re(ω) = mΩ, the anti-Stokes lines go through a discontinuous change, signaling the presence of a branch cut at these values of ω [47]. For ω_R > mΩ, the boundary condition can then be continued to P_2, implying that f(l_3) = {R, 1; z_2} up to a multiplicative factor. For ω_R < mΩ, the boundary condition must instead be continued to P_1, so f(l_1) = {R, 1; z_1} up to a multiplicative factor.
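The horizon quantities entering these boundary conditions are elementary functions of (M, a, Q); the sketch below collects them in Planck units. The horizon-area formula A_± = 4π(r_±^2 + a^2) is the standard Kerr-Newman expression, consistent with q_0 = (A_±/4π)^2 above; T_H and Ω are taken directly from the text:

import numpy as np

def horizon_quantities(M, a, Q=0.0):
    # Kerr-Newman horizons and associated quantities (Planck units).
    disc = np.sqrt(M**2 - a**2 - Q**2)     # requires a sub-extremal black hole
    r_p, r_m = M + disc, M - disc          # outer and inner horizon radii
    A_p = 4.0 * np.pi * (r_p**2 + a**2)    # outer horizon area
    A_m = 4.0 * np.pi * (r_m**2 + a**2)    # inner horizon area
    T_H = (r_p - r_m) / A_p                # Hawking temperature, as in the text
    Omega = 4.0 * np.pi * a / A_p          # angular velocity of the event horizon
    return r_p, r_m, A_p, A_m, T_H, Omega

print(horizon_quantities(1.0, 0.5))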
F. Computation

Below we will construct a contour which asymptotically approaches P_1 or P_2, encloses r_+, and consists only of anti-Stokes lines. When the contour reaches a turning point, it circles around it, excluding it from the enclosed region. The contour we use to analyze the ω_R < mΩ case is shown in Figure 2 below. We fix the boundary condition for a solution f at P_1 or P_2 and then evolve it along the contour in the WKB approximation. The monodromy along the contour must agree with that determined by the boundary condition at r → r_+; this provides a constraint on f. Furthermore, the solution strictly inside the region enclosed by l_2 and l_4 can be approximated by c_− f_− (since f_+ is exponentially small there). Evaluating at r_+ then gives c_− = T, while continuing to l_{2,4} yields c_− = c_−(l_{2,4}) [20], so we get a second constraint, T = c_−(l_{2,4}). These two constraints completely determine R and T. In the following, results accurate only to leading order in |ω|^{−1}, derived from the WKB approximation, are indicated with the ≈ symbol; for fractions this generally includes corrections both to numerator and denominator.

The case ω_R < mΩ

First consider the regime ω_I < 0, corresponding to time decay, and ω_R < mΩ. Starting from P_1, where f is given by the boundary condition at spatial infinity, f(l_1) = {R, 1; z_1} holds along l_1 until the vicinity of r_1. We may derive f(l_2) by rotating counterclockwise around r_1, from l_1 to l_2. This rotation involves crossing two Stokes lines and the anti-Stokes line between them, so f(l_2) = {i, 1 + iR; z_1}. In the region enclosed by l_2 and l_4, f_− is the dominant solution; we may therefore determine T directly from c_−(l_2), as in Eq. (23). Next we follow the contour to r_2 along l_2, to ř_1 along l_4, and then to ľ_1. Since f and z are multivalued functions of r, branched at r_+, traversing the contour brings us to another Riemann sheet; we use ˇ to denote objects on this second sheet. We first rewrite f(l_2) with reference point z_2. Counterclockwise rotation around r_2 from l_2 to l_4 then yields a solution whose c_−-coefficient, relative to ř_1, is (1 + iR)e^{−2πωσ_+}. Finally, f(ľ_1) may be obtained by rotating clockwise around ř_1, thus crossing a Stokes line. The only singularity of the differential equation enclosed by the contour is at r_+. Hence f(ľ_1) and f(l_1) differ only by the action of the monodromy matrix at r_+. Our boundary condition requires that f be an eigenvector of this monodromy with eigenvalue Φ_O = exp(−2πωσ_+). This implies two degenerate constraints on the coefficients c_±; combining this result with Eq. (23) yields R. The same contour can be used to calculate T and R at frequency −ω. The only change in the analysis is due to the reversed dominance pattern among f_+ and f_−, each becoming dominant where it was previously subdominant and vice versa.

The case ω_R > mΩ

Next, consider the regime ω_I < 0, ω_R > mΩ. The analysis can be carried out exactly as in the ω_R < mΩ case, but with the contour reflected about the real r-axis and with the boundary condition at spatial infinity continued to P_2. This yields the corresponding amplitudes, and, similarly, by the same method we used for ω_R < mΩ, their values at frequency −ω.

G. Results

It is convenient to introduce the notation

S_j ≡ ∫_{l_j} ωV dr, (34)

where the subscript j ∈ {i, o} indicates that the integration contour crosses the real axis inside (r_− < r < r_+) or outside (r > r_+) the event horizon. The amplitudes T and R then take simple closed forms in terms of the S_j; analytic expressions for the S_j can be obtained directly in terms of elliptic integrals.
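The closed-form elliptic-integral expressions for S_j can be spot-checked by direct numerical quadrature. The sketch below integrates ωV along a straight segment between two complex "turning points" of a toy potential; a faithful check would instead trace the anti-Stokes contour (on which Im(ωV dr) = 0) and track the branch of the square root, which we do not attempt here:

import numpy as np

def action_S(V, omega, r_a, r_b, n=4001):
    # Approximate S = integral of omega * V(r) dr along the straight segment
    # from r_a to r_b in the complex r-plane (a crude stand-in for l_i, l_o).
    t = np.linspace(0.0, 1.0, n)
    r = r_a + (r_b - r_a) * t
    return omega * (r_b - r_a) * np.trapz(V(r), t)

# Toy potential with complex-conjugate "turning points" at 2 ± i:
V = lambda r: np.sqrt((r - (2 + 1j)) * (r - (2 - 1j))) / (r**2 + 4.0)
print(action_S(V, omega=-10j, r_a=2 + 1j, r_b=2 - 1j))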
We shall sometimes use the notations l_i ≡ l_2 and l_o ≡ l_4, so that l_j may be taken as the integration contour of S_j. Both S_i and S_o are real in the highly damped limit, because in that limit r_1 and r_2 are connected by anti-Stokes lines on both sides of the horizon [48], and along these lines Im(ωV dr) = 0 by definition. Note that e^{4πωσ_+} = ∓e^{(ω−mΩ)/T_H}, where the upper (lower) sign corresponds to fermions (bosons), hereafter. Our results for the highly damped regime may now be summarized in closed form as Eqs. (37) and (38) for T and R. These results reflect the expected branch cuts in T and R at ω_R = mΩ. In the case of T there is no cut for ω_I > 0; this is a consequence of the fact that in this regime the boundary condition at the horizon is uniquely defined without analytic continuation, as described in the appendix of [5]. Our results also imply a relation which we will use in our discussion of the greybody factors in §V. As a consistency check, note that these results satisfy the analytically continued flux conservation relation, Eq. (6).

H. Boltzmann weights and resonances

Both T and R given in Eqs. (37)-(38) have a suggestive structure. Beginning from Eq. (34), expanding V and the r_i at large |ω| gives

S_j = (4iT_j)^{−1} (ω − ω̃_j) + O(|ω|^{−1}), (42)

with T_j and ω̃_j defined in Eqs. (43)-(44). Each term e^{2iεS_j} in Eqs. (37) and (38) thus becomes exp[ε(ω − ω̃_j)/2T_j], and may be interpreted as a Boltzmann weight corresponding to frequency ω, temperature 2εT_j and chemical potential ω̃_j. (Alternatively, frequency ω/2, temperature εT_j and chemical potential ω̃_j/2.) Moreover, each T_j is real, because S_j is real to leading order. In addition, Eqs. (24) and (35) imply simple relations among the T_j and among the ω̃_j (Eqs. 45-46), and Re(ω̃_j) ∝ m according to Eq. (44). In our conventions, T_o is negative and T_i positive. However, as the Boltzmann weights appear with different signs in Eqs. (37) and (38), the opposite convention would have been equally natural. We give a speculative thermodynamic interpretation of Eqs. (45) and (46) in §V C. These quantities are illustrated in Figure 3. Noting that T and R diverge when Eq. (47) is satisfied for integer n, we may identify 4πiT_o and ω̃_o − 2πiT_o respectively as the level spacing and the offset of the highly damped QNM frequencies. For example, the real part of the highly damped QNMs asymptotically approaches Re(ω̃_o) ∝ m. The QNM spectrum Eq. (47) was shown in [7] to agree with previous numerical computations [13].

In a similar fashion, 4πiT_i and ω̃_i − 2πiT_i are shown in §III to be respectively the level spacing and offset characterizing another type of resonant frequencies, known as total transmission modes (TTMs). The asymptotic frequencies of the TTMs of a rotating black hole have so far been unknown. Low-lying TTMs of a Schwarzschild black hole were discussed in [24,25]. In the extremal limit a → M we have T_i → 0 and ω̃_i → mΩ, so the TTMs coalesce to frequency mΩ. In the limit a, Q → 0 we have T_i → ∞, so no TTMs exist in this limit in the highly damped regime.

III. RESONANCES AS EXCITATIONS

In this section we further examine the asymptotically damped black hole resonances. These resonances include the standard quasinormal modes (QNMs), but also include other interesting families of modes; one might call all of them "quasinormal" in the sense that they decay with time, but in what follows we stick to the standard terminology. As we will see, each mode that we discuss can be associated with a semiclassical state localized along one or two specific anti-Stokes lines, independent of the boundary conditions at the horizon or spatial infinity. The corresponding eigenstates and eigenvalues depend only on the integral of the potential V along these lines.
The eigenvalue frequencies of the various modes satisfy a complex Bohr-Sommerfeld equation. In the QNM case, this equation was shown in [7] to reproduce earlier numerical results. Our analysis suggests that in the highly-damped regime, scattering off the black hole can be effectively described in terms of a few coupled, one-dimensional, semiclassical systems. This picture fully reproduces the resonances inferred from §II.

This section is organized as follows. In §III A we show that the wave equation becomes semiclassical for the inverted potential which appears naturally along anti-Stokes lines, define the corresponding eigenstates, and derive their eigenvalues. Next, we discuss four types of eigenstates: (i) excitations along l_4, corresponding to quasinormal modes, are discussed in §III B; (ii) excitations along l_2, corresponding to total transmission modes, are described in §III C; (iii) excitations circling l_2 and l_4, corresponding to total reflection modes, are discussed in §III D; and (iv) internal excitations along l_5, associated with the behavior around r_−, are discussed in §III E. This last family of excitations does not appear directly in T or R, so they are not strictly speaking resonances of the black hole, but from our present point of view they appear to be natural objects to consider. The main properties of these modes are summarized in Table I. The emerging picture of a connected system of black hole excitations is summarized in §III F.

A. Highly-damped resonances as semiclassical excitations of the inverted potential

Eq. (3) can be interpreted as a Schrödinger equation describing a particle of "energy" E_z = ω^2 subject to a potential V_z. When |ω_I| is very large, E_z is approximately real and negative, so we are looking at the classically forbidden case E_z ≪ −|V_z| ≤ 0. However, the problem can be continued to a classically-allowed one by replacing z with a Wick-rotated coordinate x ≡ iz, giving

[∂_x^2 + E_x − V_x(x)] f = 0. (48)

This is now a Schrödinger equation for a particle with energy E_x = −ω^2 in the inverted potential V_x(x) = −V_z. The energy is approximately real and positive and |V_x| ≪ E_x almost everywhere, motivating a semiclassical analysis. The coordinate x is in general complex, but it is approximately real along contours where Re(ωx) = 0. These contours are the anti-Stokes lines defined by Re(iωz) = 0, discussed in §II and depicted as solid contours in Figure 1. To avoid confusion, henceforth we refer to these contours as excitation lines. Although V_x is in general complex, this makes little difference when |V_x| ≪ E_x, which holds true along most of each excitation line l. This condition breaks down near the turning points x_i = iz_i, but in these regions the potential term is real and negative along l, so Eq. (48) can still be considered as a real Schrödinger equation. Furthermore, V_x diverges at the turning points, suggesting that the excitation lines can be regarded as one-dimensional potential wells. We may therefore study bound states, determined by applying the wave equation (48) to each excitation line l in the system. Note that black hole QNMs have previously been studied by inverting the potential and mapping the resonances to bound states, in special cases (for example scattering off a slowly rotating black hole in the eikonal limit) where the potential can be approximated by a Pöschl-Teller potential [26].
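For the Pöschl-Teller case just mentioned, the bound-state correspondence is exact: for V(x) = V_0/cosh^2(αx) the QNMs are ω_n = ±(V_0 − α^2/4)^{1/2} − iα(n + 1/2) (the Ferrari-Mashhoon result). A quick numeric sketch with arbitrary illustrative parameter values:

import numpy as np

V0, alpha = 1.0, 0.5   # arbitrary illustrative parameters
for n in range(3):
    omega = np.sqrt(V0 - alpha**2 / 4.0) - 1j * alpha * (n + 0.5)
    print(f"n={n}: omega = {omega:.4f}")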
In the highly-damped limit, the eigenstates and eigenvalues corresponding to the bound states are determined, as usual, by a Bohr-Sommerfeld rule derived from the semiclassical (WKB) approximation,

∫_l p_x dx = π (n + µ/4), (50)

where p_x = (E_x − V_x)^{1/2} is the classical momentum corresponding to Eq. (48), and n ∈ Z, with |n| ≫ 0 the number of nodes of f along l. The number µ is the Maslov index [see for example Ref. 27], which counts the π/4 phase shifts associated with the turning points traversed by l. In the highly damped limit, to order O(|ω|^{−1}), Eq. (50) becomes

S_j = π (n + µ_j/4), (51)

where j is the index of the excitation line or combination of lines. With the appropriate choice of orientation for l_j, we may identify the classical actions S_2 and S_4 with the S_i and S_o defined in Eq. (34). As in Eq. (42), we expand S_j = (4iT_j)^{−1}(ω − ω̃_j) + O(|ω|^{−1}), with T_j and ω̃_j defined as in Eqs. (43)-(44). This yields the discrete, infinite eigenvalue spectrum of excitation frequencies

ω_j(n) = ω̃_j + 4πiT_j (n + µ_j/4), (52)

generalizing the QNM condition of Eq. (47). The resonances all have ω_I < 0 (recall that for ω_I > 0, T and R are constants), so nT_j < 0. Recall that S_j and T_j are purely real in the highly damped limit because along the excitation lines, by definition, ωV dr ∈ R. Eq. (44) implies that Re(ω_j(n)) = Re(ω̃_j) ∝ m; in particular, when m = 0, the real parts of the resonant frequencies vanish to order |ω|^{−1}.

The presence of bound states in the system, if only along specific lines in the complex r-plane, suggests that their eigenvalues may have physical significance. Indeed, in §III B-§III E it is shown that applying Eq. (51) to each excitation line reproduces a certain resonance mode of the black hole. For example, excitations along l_4 correspond to the QNMs. Note that this definition of the excitations does not involve fixing the boundary condition at spatial infinity or at the horizons. Rather, Eq. (51) determines the semiclassical eigenstates locally, purely in terms of (the integral of) V along l_j. The Stokes phenomenon determines the relation between the wavefunctions along adjacent excitation lines. This, as well as the exponential decay of the wavefunction in time, makes it natural to view the excitation lines as coupled to one another. Indeed, the analysis of the transmission-reflection problem in §II could be rephrased in the language of tunneling through the potential barriers at the turning points; we discuss this in §III F. For convenience we define ρ ≡ −iω, such that the WKB modes f_± = e^{±iωz} = e^{±iρx} with a plus (minus) sign travel toward (away from) spatial infinity.

B. Quasinormal modes

The most familiar type of black hole resonance is a quasinormal mode (QNM). These linear, damped modes dominate the intermediate-time behavior of black hole perturbations. The discrete QNM frequencies, which correspond to poles of the transmission and reflection amplitudes T and R, may be determined by studying perturbations that satisfy purely outgoing boundary conditions at both the event horizon and spatial infinity along the physical interval r_+ < r < ∞. The highly-damped QNM frequencies were derived analytically for spherically-symmetric black holes in [23], and for a rotating black hole in [7].

Now we propose to identify these resonances with bound states confined along an excitation line. Which line should we consider? In the classical picture of the QNM, the potential barrier on the interval r_+ < r < ∞ plays an important role; one pictures this barrier as "ringing" and emitting energy toward the horizon and spatial infinity.
The barrier-ringing picture motivates the suggestion that the QNMs correspond to the excitation line l_4, as it intersects the real r-axis at a point r_{l4} located just outside the event horizon. A second motivation is that R and T, both of which develop a pole at the QNM frequencies, are the amplitudes of the WKB modes f_± ∝ e^{±iρx} along l_4 (see Figure 2). These rough arguments lead to the right conclusion: Eq. (51) applied to l_4, with μ = 2 phase shifts associated with r_1 and r_2, precisely agrees with the highly damped QNM condition of [7] for a rotating black hole. This equation may be rewritten as Eq. (53), which is indeed the location of the poles in T and R, as seen from Eqs. (37)-(38). As ω approaches one of the QNM frequencies given by Eq. (53), T and R diverge while satisfying |T| ≈ |R|. So the QNM excitations reduce to standing waves along l_4, decaying exponentially in time. One might heuristically understand this time decay as follows: for ω_R < mΩ (ω_R > mΩ), the part of the wavefunction outgoing into the black hole, T e^{-iρx}, gradually tunnels across the turning point into l_2 [and into l_3 (l_1)], whereas the ingoing part, R e^{+iρx}, tunnels its way to l_1 (l_3), thereafter escaping to spatial infinity.

C. Total-transmission modes

A less frequently explored type of black hole resonance is the total-transmission mode (TTM) [49]. These modes, defined by R = 0, can be studied as perturbations that are purely outgoing at the event horizon and purely ingoing at spatial infinity. Like the QNMs, the TTMs are associated with a specific excitation line. To guess which line it should be, note that Eqs. (23), (30) give T ≈ 1. This implies that the wavefunction along l_2 becomes a (damped) standing wave, f(l_2) ≈ -iε e^{+iρx} + e^{-iρx}, suggesting that excitations along this line could correspond to the TTMs. Furthermore, along l_2 the reflection amplitude R does not appear as the coefficient of either WKB component (see Figure 2). Indeed, applying Eq. (51) to l_2 yields

e^{2iS_2} + 1 = exp(2i ∫_{l_2} ωV dr) + 1 = 0,

which is the condition for the numerator of R to vanish, thus determining the TTM frequencies. Note that f(l_2) ≈ -iε e^{+iρx} + e^{-iρx} implies that c_+(l_1, l_3) = 0 for ω_R < mΩ (ω_R > mΩ), so the TTM excitation cannot escape from l_2 to spatial infinity.

Total transmission modes occur in various physical settings in which two systems are connected through tunneling across a barrier. It is generally found that the frequencies of total transmission into a system coincide with its metastable eigenfrequencies [28]. This suggests that the TTM frequencies of a black hole could coincide with the eigenenergies of some internal black hole degrees of freedom. In a sense this is what we have found in the highly damped limit: the TTM frequencies of the classical black hole coincide with the energies of bound states along the line l_2, which is "internal" to the black hole in the sense that it meets the real axis at a point r_{l2} behind the event horizon, r_- < r_{l2} < r_+.

The description of the TTMs as excitations along l_2 uses the analytic continuation of the metric behind the event horizon. The physical significance of such a continuation is of course unclear. However, we emphasize that the resonant frequencies themselves do not depend on this continuation. The modes may be defined by imposing the appropriate boundary conditions at r_+ and as r → ∞. The resonant frequencies can then be derived using Teukolsky's equation along r_+ < r < ∞, for example using the method of [29].
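For reference, here is a hedged sketch of the TTM spectrum implied by the condition e^{2iS_2} + 1 = 0 above, combined with the expansion S_2 ≈ (4iT_2)^{-1}(ω - ω̃_2) of §III A; the half-integer offset is forced by e^{2iS_2} = -1 (and agrees with μ_2 = 2 in the reconstruction after §III A), while overall sign conventions are assumptions:

```latex
% Hedged: discrete TTM frequencies from e^{2 i S_2} = -1.
\begin{align}
  e^{2 i S_2} = -1
  \;\Longrightarrow\;
  S_2 &= \pi\Big(n + \tfrac{1}{2}\Big), \qquad n \in \mathbb{Z}, \\
  \omega_n &\approx \tilde\omega_2 + 4\pi i\, T_2\Big(n + \tfrac{1}{2}\Big),
  \qquad \operatorname{Im}\omega_n < 0 .
\end{align}
```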
D. Total-reflection modes

When T ≈ 0, the wavefunction assumes the same form along l_2 and along l_4, f ∝ e^{+iρx}, describing a purely traveling wave. The TRMs can therefore be identified for ω_R < mΩ as excitations clockwise circling l_2 and l_4, traveling from r_1 to r_2 along l_2 and back to r_1 along l_4, and vice versa for ω_R > mΩ. Applying Eq. (51) to this closed contour yields Eq. (55); this is the condition for the numerator of T to vanish in Eq. (37), and it therefore indeed determines the TRM frequencies. Note that modes with the opposite orientation, counterclockwise (clockwise) rotating for ω_R < mΩ (ω_R > mΩ), are precluded by the Stokes phenomenon (such a mode would be dominant on the Stokes lines which run to r_+, but then crossing these lines would introduce components of the other WKB mode).

The integral in Eq. (55) can be evaluated by residues, in which case the only contribution comes from the singularity at r_+. This suggests that the TRMs are in some sense associated with the event horizon. Note also that the expression ∓(e^{4πωσ_+} - 1) = e^{(ω-mΩ)/T_H} ± 1 is the inverse of the spectrum of Hawking's thermal radiation from the horizon. The association between TRMs and Hawking radiation will be revisited in §V. The TRM frequencies inferred from Eq. (55) are given by Eq. (56). This expression for the TRM frequencies holds also for non-rotating black holes, where Ω = 0. In §V B it is shown that Eq. (56) is exact: there are no O(|ω|^{-1}) corrections.

E. Inner horizon modes

One more excitation line, l_5, lies in the Re(r) > 0 region. This line emanates from the turning point r_0 and circles the inner horizon r_-, as shown in Figure 1 [51]. Excitations associated with l_5 are not directly relevant to the scattering process discussed in §II and do not appear in T and R, because this line is not directly connected to the lines l_1-l_4. We may nevertheless calculate the eigenstates and eigenvalues of excitations associated with l_5. The excitation frequencies are given by Eq. (51), with integration carried out along l_5 and μ = 0. The only contribution to the integral arises from the singularity at r_-. The result is Eq. (57), in which T_- and Ω_- ≡ 4πa/A_- are the temperature and angular velocity of the inner horizon, respectively. As in the case of TRMs, only one orientation, f ∝ e^{+iρx}, is possible. Eq. (57), and the resonant frequencies it implies, demonstrates that these modes are associated with the inner horizon. The excitation line l_5 does cross the real axis near r_-, at two points: close to the ring singularity (r_0 = 0 if Q = 0) and at a point r_{l5} lying between r_- and r_{l2}, so r_- < r_{l5} < r_{l2} < r_+. We therefore call these modes inner horizon modes (IHMs). There is a formal resemblance between the IHMs and the TRMs, the latter similarly associated with the outer horizon.

Although l_5 is not connected to the other excitation lines discussed above, there is a special case where we can nevertheless relate l_5 to the boundary condition at spatial infinity. Namely, when the latter is purely outgoing [f(r → ∞) ∝ e^{iωz}], the asymptotics at l_2 can be continued directly to l_5, implying that f(l_5) ∝ e^{-iωz} ∝ e^{-iρx}. Such a continuation cannot be carried out for more general boundary conditions at spatial infinity.
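A hedged algebraic sketch connecting the thermal factor above to the discrete TRM frequencies, together with the analogous inner-horizon form suggested by the formal TRM-IHM resemblance; the half-integer versus integer offsets track the fermionic/bosonic sign, and the IHM line is purely our extrapolation:

```latex
% Hedged: TRM frequencies as zeros of the inverse Hawking factor
% e^{(\omega - m\Omega)/T_H} \pm 1 (upper sign: fermions).
\begin{align}
  e^{(\omega - m\Omega)/T_H} \pm 1 = 0
  \;\Longrightarrow\;
  \omega_n &= m\Omega + 2\pi i\, T_H \times
  \begin{cases} n + \tfrac{1}{2} & \text{(fermions)} \\ n & \text{(bosons)} \end{cases}
  \qquad n \in \mathbb{Z},\ \operatorname{Im}\omega_n < 0; \\
  % assumed IHM analogue at the inner horizon:
  \omega_n &\stackrel{?}{=} m\Omega_- + 2\pi i\, T_- \times
  \Big\{\, n + \tfrac{1}{2} \ \text{or}\ n \,\Big\}.
\end{align}
```

The 2πiT_H spacing read off here matches the harmonic structure derived in §IV B for a closed contour encircling r_+, where ∆t = (2iT_H)^{-1}.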
F. Summary: connected semiclassical systems

The results of this section show that highly-damped perturbations of a rotating black hole may be described in terms of three inter-connected lines: (i) l_1 or l_3 (depending on ε), admitting waves that travel to/from spatial infinity; (ii) l_4, corresponding to the near environment of the black hole, carrying the QNM excitations that can tunnel out to l_2 and to infinity through l_1/l_3; and (iii) l_2, describing some internal black hole region between r_- and r_+ and carrying the TTM excitations, which can be excited by a wave incident from spatial infinity but cannot directly escape to infinity. Combined, l_2 and l_4 form a loop that carries the TRM excitations, modes circling the event horizon which are related to Hawking radiation. Each Boltzmann factor (see §II H) in Eqs. (40) and (41) is associated with one of these types of excitations.

Each of the excitation lines is connected to two other lines at the turning points. Since the effective potential diverges at these turning points, we can view each excitation line as a "potential well" supporting bound states. The wavefunction can tunnel from one line to an adjacent one while picking up a phase shift, as dictated by the Stokes phenomenon. This provides a heuristic picture of the manner in which excitations can decay and possibly interact. Each excitation line l_j crosses the real axis at a single point r_{lj}, corresponding physically to an equatorial ring (§II C); l_4 corresponds to a ring just outside the outer horizon, near the peak of the potential barrier, while l_2 is associated with an internal ring lying between r_- and r_+. The complex-plane connections between the different excitation lines directly relate the behavior of the perturbation along disconnected, distant rings.

IV. COMPLEX GEODESICS

In the preceding sections, the one-dimensional wave equations (3) and (48) were analyzed with little reference to the underlying (3+1)-dimensional metric. Since radiation propagates along null geodesics in the large-ω limit, one might expect that quantities playing a role in our analysis, such as the characteristic spacing and offset of the resonant frequencies, should be understandable in terms of null geodesics in the complexified metric. In this section we show that this is indeed the case. In §IV A we review some generalities on the analytically continued null geodesics and identify r_{1,2} as turning points of these geodesics in the small impact parameter limit. In §IV B we focus our attention on geodesics in the equatorial plane, and show the role they play in our analysis.

A. Geodesics

We study the complexified geodesic trajectories of a massless particle with angular momentum p_φ = m, complex energy E = ω, and Carter's (fourth) constant of motion [30] fixed to some Q_C. Along a null geodesic, the derivatives of the Boyer-Lindquist coordinates with respect to the affine parameter λ are then given by Eqs. (59)-(62) [15], where the square-root branches in Eqs. (59) and (62) are chosen independently, and we recall Σ = r² + a² cos²θ. To leading order in |ω|^{-1}, Eq. (59) becomes ṙ ≈ E r^{-2} q_0^{1/2}; so in the highly damped limit r_1 and r_2 approach the turning points of the complexified geodesics, where ṙ = 0. The covariant momentum p_r is determined by the constants of motion as in [31].
Using this together with Eqs. (11)-(13), the quantity ωV, which was crucial for the WKB analysis, may be expanded around large ω; the expansion defines the quantity V_s. The resonant frequency equation (51) can now be written, to order |ω|^0, as a complexified Bohr-Sommerfeld rule [7],

2 ∫_{l_j} p_r dr = π(n + μ_j/4),

where the excitation lines l_j can be understood as contours of steepest descent of ωV connecting the geodesic turning points r_i which lie at the endpoints of l_j. In order to reproduce the resonant frequencies to order |ω|^{-1}, the integrand should be replaced by p̃_r = p_r + isV_s + iA_1V_A.

B. Equatorial geodesics

Focusing on the equatorial region, we may replace Eqs. (59)-(62) by the lowest-order terms in their expansion about θ = π/2. To this order, Q_C = 0. On the equator, ṫ also vanishes to leading order in |ω|^{-1} at the turning points r_i, regardless of arg(ω). More generally, on the equator r_1 and r_2 are turning points where both ṙ and ṫ vanish simultaneously, in the limit of small impact parameter b ≡ p_φ/E in which |b| ≪ a, and in particular when p_φ = 0.

The Boltzmann factors of §II H can now be related to the equatorial null geodesics. Consider the expansion of the action S ≈ (ω - ω̃)/(4iT), where T and ω̃ are given by Eqs. (43)-(44) and we have omitted the index j of the excitation lines for brevity. A direct comparison between these quantities and Eqs. (59)-(61), after substituting θ = π/2, gives, to leading order in |ω|^{-1}, relations in which ∆t and ∆φ are respectively the time and the azimuthal angle elapsed along the integrated geodesic. Moreover, using V_A = -(2 cos θ)^{-1}(±dθ/dr), where we define a logarithmically stretched angular coordinate ζ, the last approximation being valid near θ = π/2, the integral of ωV between any two values of r is given by Eq. (70). The solution to the transmission-reflection problem in Eqs. (40)-(41) may thus be rewritten in terms of the more physical quantities associated with a null geodesic. Eq. (70) is seen to be a restatement of the result S ≈ ∫ p̃_r dr, because along null geodesics p_r dr = ω dt - m dφ - p_θ dθ.

It follows that the highly-damped resonant frequencies corresponding to a given excitation contour l are determined by Eq. (71), where ∆t, ∆φ, ∆ζ, and ∫ V_s dr are calculated along l, and are all imaginary. As an example, for a closed, clockwise contour that encircles r_+ we find ∆t = (2iT_H)^{-1}, ∆φ = Ω∆t, ∆ζ = 0, and ∫ V_s dr = iπ. Plugging these quantities into Eq. (71) with μ = 0 yields the TRM frequencies of Eq. (56). Altogether we have found that the resonant frequencies ω(n)/2π can be understood as harmonics of a fundamental (imaginary) frequency (2∆t)^{-1} plus an offset ω̃/2π + μ/(8∆t),

ω(n)/2π = n/(2∆t) + ω̃/2π + μ/(8∆t),

such that ∆t and ω̃∆t are associated respectively with the time and with a generalized angular distance (including m∆φ and iA_1∆ζ, as well as μ and spin terms) elapsed along a null geodesic corresponding to the relevant excitation line. Somewhat similar connections have been suggested by studies of black holes in the eikonal limit, where approximate expressions for the QNMs were inferred from the decay of wavepackets which travel initially along unstable closed orbits [26,32].

V. BLACK HOLE DECAY AND GREYBODY FACTORS

In this section we sift the results of the preceding sections for clues about the quantum description of the black hole spacetime. The analytically continued spectrum of Hawking radiation escaping from the black hole is presented in §V A and §V B.
In §V C we recall some examples where a similar spectrum was found to correspond to a dual conformal field theory (CFT), and speculate on the microscopic description underlying the present case.

A. Decay spectrum

First, recall that for real frequency ω the transmission amplitude provides information about the Hawking radiation emitted from the black hole, as observed from spatial infinity. In [33] it is argued that this observed spectrum Γ(ω) is related to the absorption probability σ(ω) by Γ(ω) = σ(ω) n_H(ω), where n_H(ω) denotes the spectrum of pure blackbody radiation at temperature T_H and potential mΩ, and σ(ω) acts as a "greybody factor" which filters this thermal spectrum. There is some arbitrariness in how one continues Hawking's formula to complex ω; we make a choice which will be convenient for what follows, namely n_H(ω) = [e^{(ω-mΩ)/T_H} ± 1]^{-1}. Upper (lower) signs correspond to emission of fermions (bosons), above and henceforth.

B. Exact cancellation of Hawking spectrum

In the expression for the decay spectrum in Eq. (74), the pole of the spectrum n_H in Eq. (73) cancels with the zero of T(ω)T(-ω) in Eq. (40). Based on our arguments so far, though, one might have thought that this cancellation is only approximate, and that the exact analytically continued spectrum would have poles and zeroes separated by a distance O(|ω|^{-1}). Actually, the zeroes and poles cancel one another exactly. The reason is that the boundary condition Eq. (5) manifestly requires T(ω) ≠ 0, so T(ω) = 0 is possible only if Eq. (5) breaks down. But this equation breaks down only when the two solutions near r = r_+ have the same monodromy, since then we cannot pick out a solution uniquely by specifying its monodromy. Inspection of Eqs. (20) or (21) indicates that this condition is equivalent to the vanishing of the denominator of n_H in Eq. (73). This argument applies quite generally, in particular to the spherical black holes analyzed in [5]. We have thus shown that T can have zeros only where n_H has poles. This directly relates the TRM frequencies to the poles of n_H. In Appendix A it is shown that T does indeed have such zeros in a large class of black holes in the highly-damped limit.

C. Speculations on the microscopic description

As shown in §V A, there is a pleasantly simple expression for the decay spectrum at large imaginary frequencies, given in Eq. (74). But what could its physical meaning be?

Examples of known dual CFTs

Recall that computations of the same quantity at small real frequencies have in the past given information about quantum gravity in black hole backgrounds [2,8,9]. For example, consider scalar emission from a four-dimensional, slowly rotating (Ω ≪ 1/M) black hole in the regime ω ≪ 1/M. The corresponding decay spectrum given in [9] can be written as in Eq. (75), where P_{2l+1} is a polynomial of order 2l+1. Near BPS saturation (Q = M - ε and a² ∼ Qε for small ε > 0) the degrees of freedom of the black hole are described by a chiral (0,4) superconformal field theory, and an SCFT computation of the decay spectrum agrees precisely with Eq. (75) [9]. A second example is scalar emission from a five-dimensional, non-rotating black hole. In a certain "dilute gas" limit, the decay spectrum is given by Eq. (76) [9], where a positive (negative) sign corresponds to odd (even) l. Again, this agrees with a stringy computation of the black hole decay spectrum [9]; these results were important precursors of the AdS/CFT correspondence. In both Eqs.
(75) and (76) there are characteristic denominator factors, which have the form of partition functions of ensembles constructed from the degrees of freedom of the microscopic CFT. In the case of Eq. (75) the relevant CFT is chiral, so we see only one type of bosonic excitation, at temperature T_H. In Eq. (76) the CFT is non-chiral, and the left-moving and right-moving sectors have different temperatures T_L, T_R, obeying Eq. (77). The appearance of a product of two denominator factors reflects the fact that emission takes place only when left-moving and right-moving excitations collide. Although the excitations can be fermionic or bosonic, with conformal weights h_L = h_R = (l+2)/2, bosonic statistics of the outgoing scalar emission is ensured by h_L - h_R = 0.

A third and last example is the (2+1)-dimensional asymptotically anti-de Sitter BTZ black hole [34]. Here the QNM spectrum is given by the expression of [35,36], in which n, k_{L,R} ∈ Z and Λ is the cosmological constant. The excitation temperatures T_{L,R} characterize respectively the left- and right-moving Virasoro algebras. These temperatures also satisfy Eq. (77). The angular momentum of the perturbation and the conformal weights h_{L,R} satisfy relations [Eqs. (79)-(81)], with ∆J = m in the present study.

The decay spectrum of Eq. (74) in the present analysis contains a structure similar to the above examples: in particular, a Boltzmann weight with characteristic temperature and chemical potential appears in the denominator, related to the highly-damped QNM spectrum. To compare our results with the case of a slowly rotating black hole in Eq. (75), consider the highly damped results in the a → 0 limit. Here |2T_o| → T_H, so the decay spectra in Eqs. (74), (75) have a similar Boltzmann factor. At low frequencies and non-negligible rotation, the Kerr decay spectrum is probably more formally similar to the two other (BTZ and extremal 5D) examples given above, because Kerr QNMs in this regime fall into two families [29], implying that two Boltzmann-like factors appear in the denominator of Γ.

One would like to use this structure to extract information about the microscopic degrees of freedom of the rotating black hole in the highly damped frequency regime. Here we present a few speculations in that direction. We took |ω| much larger than all other scales, so one might expect that the physics in this regime is scale invariant; hence we might try to interpret these degrees of freedom as belonging to a "dual" CFT. The decay spectrum in Eq. (74) should then be proportional to an analytically-continued thermal correlation function of the CFT, and the QNM frequencies should be related to the poles of its retarded thermal correlators.

What can we say about the degrees of freedom of this CFT? A clue comes from Eqs. (45) and (46), and from their formal similarity to Eqs. (77) and (79)-(81). Consider a pair of thermodynamic systems at temperatures T_1 and T_2, with chemical potentials μ_1 and μ_2, coupled to the environment only through processes where each system changes its internal energy by the same amount, and similarly for the particle number: dU_1 = dU_2 and dN_1 = dN_2. Now we view the pair as making up a single combined system, with dU = dU_1 + dU_2 and similarly for dN and dS, with S the entropy. For reversible processes dS_{1,2} = (1/T_{1,2}) dU_{1,2} + (μ_{1,2}/T_{1,2}) dN_{1,2}, so the combined system effectively has a temperature and chemical potential built from those of the two subsystems, as in Eq. (82). This is just what we found in Eqs. (45), (46), where the two subsystems are the ones associated with QNMs and TTMs, and the thermodynamics of the combined system is just the usual one expected for the black hole!
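A short worked version of the two-subsystem argument, following the sign convention of the text (dS = dU/T + μ dN/T); the factor-of-two bookkeeping from dU_1 = dU_2 = dU/2 is our reading of the missing Eq. (82) and should be checked against Eqs. (45)-(46):

```latex
% Worked sketch: effective thermodynamics of two subsystems coupled so
% that dU_1 = dU_2 and dN_1 = dN_2 (hence dU_i = dU/2, dN_i = dN/2).
\begin{align}
  dS &= dS_1 + dS_2
      = \frac{dU_1}{T_1} + \frac{\mu_1}{T_1}\,dN_1
      + \frac{dU_2}{T_2} + \frac{\mu_2}{T_2}\,dN_2 \\
     &= \frac{1}{2}\left(\frac{1}{T_1} + \frac{1}{T_2}\right) dU
      + \frac{1}{2}\left(\frac{\mu_1}{T_1} + \frac{\mu_2}{T_2}\right) dN ,
  \intertext{so the pair behaves as a single system with}
  \frac{1}{T_{\rm eff}} &= \frac{1}{2}\left(\frac{1}{T_1}+\frac{1}{T_2}\right),
  \qquad
  \frac{\mu_{\rm eff}}{T_{\rm eff}}
    = \frac{1}{2}\left(\frac{\mu_1}{T_1}+\frac{\mu_2}{T_2}\right).
\end{align}
```

With the subsystems identified as the QNM and TTM sectors, this is the harmonic-mean combination that the text matches against the Hawking temperature and potential.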
Even the statistics of the emitted particles, determined by the imaginary part of the chemical potential, arise as a sum of contributions from the two subsystems. On this basis we propose that the dual description should involve two distinct sets of degrees of freedom, somehow related to QNMs and TTMs. Speculations on partitions of the black hole into two subsystems, involving relations similar to Eq. (82), have appeared before, e.g., in [37].

This proposal is similar to what happened in the second and third cases we reviewed above, where the two subsystems consisted of right- and left-movers in the CFT and entered in a symmetrical way [52]. In our case the two subsystems are associated with QNMs and TTMs, and there is no symmetry between them; in particular, the emission spectrum includes a denominator Boltzmann factor associated with QNMs but none for TTMs. Perhaps the correct picture here involves a single excitation associated with QNMs decaying into two quanta, one of which enters the subsystem associated with TTMs while the other emerges as Hawking radiation.

As argued in §III, QNMs and TTMs are related to classical bound states along l_o and l_i, respectively. This suggests that the two sets of microscopic degrees of freedom correspond somehow to l_o and l_i, or more generally to geodesics that cross respectively outside and inside the outer horizon. When l_o and l_i are combined, the loop formed admits traveling waves which are related to TRMs and therefore to Hawking radiation. This pictorially parallels the above suggestion that microscopic degrees of freedom corresponding to the QNM and TTM sectors interact to produce Hawking radiation. It is possible that there is a relation between interactions among degrees of freedom involved in the production of Hawking radiation on the microscopic side, and interactions between excitations along l_o and l_i forming loop excitations on the classical side. If so, the classical picture discussed in §III illustrates why TTMs are not seen in Eq. (74), and supports the notion that the production of a Hawking quantum involves the decay of a QNM-related quantum into the TTM sector.

The excitations along the contours l_{i,o} are semiclassical, so heuristically the probability to find an excited quantum at a point x, P(x) ∝ |E_x - V_x(x)|^{-1/2}, is inversely proportional to the classical velocity and substantial only near the turning points r_{1,2}. It is natural to speculate that the relevant dual description is similarly "localized" around those two turning points, by analogy to the dual descriptions of extremal black holes, which are localized near the horizon. Moreover, in our analysis of the resonance spectrum the starring role was played by complexified geodesics which connect the two turning points. This is somewhat reminiscent of the discussion of asymptotically AdS black holes in [38,39]; there one has a dual description localized at the two boundaries of the spacetime, and correlators of very massive scalars between these two boundaries are dominated by complex geodesics connecting them. These correlators in particular determine the massive QNM spectrum. It would be interesting to understand whether there is any connection between the two situations.
VI. SUMMARY AND DISCUSSION

This paper analyzes the spectroscopic properties of a rotating black hole in the highly-damped frequency regime. More precisely, it is a study of the evolution of linear perturbations of a massless field with arbitrary spin, in the spacetime of a four-dimensional rotating, charged (for s = 0) black hole, in the large, nearly imaginary frequency range. Our analysis and main conclusions are as follows.

1. Evidence is presented (in §II C) to show that highly-damped perturbations are equatorially confined, with a characteristic opening angle ∆θ ∼ |m/ωa|.

2. The problem of transmission and reflection is analytically solved (§II) using the WKB approximation, the Stokes phenomenon and monodromy matching, as illustrated in Figure 2. The resulting expressions for T and R are given in Eqs. (37)-(41).

(a) The analysis exploits two complex WKB turning points r_{1,2} and the steepest-descent (anti-Stokes) lines l_j emanating from them in the complex r-plane, as shown in Figure 1. The results depend essentially on two integrals S_{o,i} [Eqs. (34), (70)] running along two of these contours, l_{o,i}, which cross the real axis respectively outside the outer event horizon and between the inner and outer horizons.

(c) The points r_{1,2} asymptotically approach complex-conjugate turning points of small impact parameter null geodesics, in which ṙ = ṫ = 0 (§IV A).

(d) T and R have poles and zeros corresponding to quasinormal (QNM), total transmission (TTM) and total reflection (TRM) modes. Their properties are studied in §II H and §III, and summarized in Table I.

3. Each black hole resonance corresponds to a semiclassical bound state of the Wick-rotated wave equation (48) along a specific contour l_j, independent of the boundary conditions at the horizon and spatial infinity.

(b) ... [Eqs. (79)-(81)] between the partition functions of ensembles constructed from two sectors of a dual CFT whose excitations interact to produce Hawking radiation.

(c) We speculate (§V C) that QNMs and TTMs similarly correspond to distinct sets of microscopic degrees of freedom of some unknown dual description of the black hole, which interact to produce Hawking radiation.

Linearized perturbations of a rotating black hole are characterized by two time scales, the horizon light-crossing time and the rotation period, which are of the same order of magnitude far from the Schwarzschild and the extremal limits. Analyses of perturbations with a single time-scale and radiative boundary conditions are complicated by strong damping. The highly-damped regime studied in this paper is more susceptible to analytical methods because the decay rate is taken to be much faster than the characteristic inverse time scale. In this regime the analysis is simplified by focusing on certain contours l_{o,i} in the complex r-plane. As in previous studies, such contours play an important role in the WKB analysis. In addition, they provide a semiclassical, essentially one-dimensional description of the black hole interactions with its environment. The black hole resonances can be modeled as bound states of the Wick-rotated wave equation along l_{o,i}, and scattering off the black hole can be understood in terms of tunneling between these contours.

In the highly-damped regime, the transmission-reflection amplitudes and the corresponding resonances are rather insensitive to the details of the potential barrier surrounding the black hole.
Any frequency-independent "contaminant" potential may be added to the potential in Teukolsky's equation, or to V_x, V_z, without changing any of our results to leading order in |ω|^{-1}. This frequency regime is universal in the sense that the results depend simply on (and are formally independent of) s, l and m, and are periodic in ω_I. Moreover, since we keep ω_R finite, the Boltzmann weights appearing in the result are finite in this limit, which allows us to determine, e.g., whether the denominators correspond to fermionic or bosonic statistics. In sum, the combination of analytic results, robustness and universality makes the highly-damped regime a particularly interesting place. On the other hand, it is far from clear how one should physically interpret the results of scattering computations in this regime; here we have only presented a few speculations in that direction.

The analysis presented here does not directly apply to the Schwarzschild case a = 0, where the turning points coalesce to r = 0, nor to the extremal case M² - a² - Q² = 0, where the inner and outer horizons merge and cut off l_2. It does, however, hold arbitrarily close to these limiting cases [7]. Moreover, much of our discussion extends beyond the four-dimensional rotating black hole, including for example the connections between the resonance spectrum and coordinate distances along geodesics. In Appendix A we show how the computations of T and R in the highly damped frequency regime can be generalized to a large class of black hole backgrounds, and demonstrate how the QNM and TTM conditions may be written in terms of the corresponding geodesics.

Previous studies of the QNMs of the Kerr black hole show that they fall into two families, only one of which survives to the highly damped regime [7,13]. It was argued numerically [40,41] and analytically [42] that the other family of QNM frequencies approaches ω_R = mΩ before disappearing. Our analysis suggests an explanation for this behavior: we found that the branch cut in T and R is naturally placed at ω_R = mΩ. Perhaps the other family of modes hides behind this cut.
Dirty Paper Coded Rate-Splitting for Non-Orthogonal Unicast and Multicast Transmission with Partial CSIT

A Non-Orthogonal Unicast and Multicast (NOUM) transmission system allows a multicast stream intended for all receivers to be jointly transmitted with unicast streams in the same time-frequency resource blocks. While the capacity of the two-user multi-antenna NOUM with perfect Channel State Information at the Transmitter (CSIT) is known and achieved by Dirty Paper Coding (DPC)-assisted NOUM with Superposition Coding (SC), the capacity and the capacity-achieving strategy of the multi-antenna NOUM with partial CSIT remain unknown. In this work, we focus on the partial CSIT setting and make two major contributions. First, we show that linearly precoded Rate-Splitting (RS), relying on splitting unicast messages into common and private parts, encoding the common parts together with the multicast message, and linearly precoding at the transmitter, can achieve larger rate regions than DPC-assisted NOUM with partial CSIT. Second, we study Dirty Paper Coded Rate-Splitting (DPCRS), which marries RS and DPC. We show that the rate region of DPCRS-assisted NOUM is enlarged beyond that of conventional DPC-assisted NOUM and that of linearly precoded RS-assisted NOUM with partial CSIT.

I. INTRODUCTION

Non-Orthogonal Unicast and Multicast (NOUM) transmission, also known as layered division multiplexing (LDM) [1], has been gaining increasing attention recently. It is considered to be a promising solution to support a mixture of unicast and multicast services for future wireless communication networks. Different from conventional approaches where unicast and multicast services are carried out in orthogonal resource blocks, NOUM allows each user to receive a dedicated unicast message and a multicast message simultaneously, based on Superposition Coding (SC) at the transmitter and Successive Interference Cancellation (SIC) at the receivers.

There are two conventional approaches studied in the literature of NOUM for multi-antenna Broadcast Channels (BC). The first approach is the practical Multi-User Linear Precoding (MU-LP)-assisted NOUM [2]-[4], where the encoded multicast and unicast streams are linearly precoded and superimposed at the transmitter, and each user decodes and removes the multicast stream with the assistance of a one-layer SIC before decoding its intended unicast stream. The other approach is the non-linear Dirty Paper Coding (DPC)-assisted NOUM, relying on DPC to encode unicast messages and SC to encode multicast messages; it is also known as "multi-antenna BC with a common message" in [5]-[7]. DPC-assisted NOUM has been shown in [7] to achieve the capacity region of two-user multi-antenna NOUM with perfect Channel State Information at the Transmitter (CSIT). However, when the transmitter only has access to partial Channel State Information (CSI), the capacity and the capacity-achieving strategy of multi-antenna NOUM remain an open problem. Even in the conventional unicast-only multi-antenna BC with partial CSIT, the capacity and capacity-achieving scheme are still unknown.

(This work has been partially supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grants EP/N015312/1 and EP/R511547/1.)
Interestingly, we have shown in our most recent work [8] that Dirty Paper Coded Rate-Splitting (DPCRS), which relies on Rate-Splitting (RS) to split user messages into common and private parts and on DPC to encode the private parts, enlarges the rate region of conventional DPC in Multiple-Input Single-Output (MISO) BC with partial CSIT. Moreover, linearly precoded RS, which has been widely studied in multi-antenna networks [9]-[13], is able to achieve a larger rate region than DPC in multi-antenna BC with partial CSIT. The application of linearly precoded RS in multi-antenna NOUM has also been recently studied in [14]. By splitting the unicast messages of users into common and private parts, jointly encoding the multicast message and the common parts of the unicast messages into a super-common stream, and linearly precoding the super-common stream and the private streams, RS-assisted NOUM has been shown to achieve higher spectral and energy efficiencies than the conventional MU-LP-assisted or Non-Orthogonal Multiple Access (NOMA)-assisted strategies, thanks to its robustness and flexibility to manage interference [14].

In this work, we first study the performance of DPC and linearly precoded RS in multi-antenna NOUM with partial CSIT. We show that linearly precoded RS is able to achieve larger rate regions than DPC-assisted NOUM. Motivated by the performance benefits of the linearly precoded RS-assisted NOUM as well as the non-linear DPCRS framework in the unicast-only transmission, we further propose a novel DPCRS-assisted NOUM transmission strategy relying on splitting unicast messages into common and private parts, encoding the private parts by DPC, and encoding the common parts together with the multicast message at the transmitter. We show that such DPCRS-assisted NOUM achieves larger rate regions than conventional DPC-assisted NOUM with partial CSIT.

II. SYSTEM MODEL

Consider a single-cell downlink transmission, which consists of one multi-antenna Base Station (BS) equipped with N_t antennas simultaneously serving K single-antenna users, indexed by K = {1, ..., K}. Hybrid unicast and multicast services are provided in the system. In each scheduled time frame, the BS delivers one multicast message W_0 to all users and K dedicated unicast messages W_k, k ∈ K, to the corresponding users. The K+1 messages are encoded into the stream vector s and linearly precoded by the precoding matrix P. The resulting transmit signal is x = Ps, which is subject to the transmit power constraint E{‖x‖²} ≤ P_t. Under the assumption that E{ss^H} = I, the power constraint becomes tr(PP^H) ≤ P_t. The signal received by user-k is y_k = h_k^H Ps + n_k, where h_k ∈ C^{N_t} is the channel between the BS and user-k, and n_k ∼ CN(0, 1) is the Additive White Gaussian Noise (AWGN). Hence, the transmit Signal-to-Noise Ratio (SNR) is equal to P_t.

A. Partial Channel State Information

In this work, we assume the CSI of each user is perfectly known at the users (i.e., perfect CSIR) and partially known at the BS (i.e., partial CSIT). The exact channel H decomposes as H = Ĥ + H̃, where Ĥ is the channel estimate available at the BS and H̃ is the corresponding estimation error. The joint distribution of {H, Ĥ} is assumed to be stationary and ergodic [10]. Though H over the entire transmission is unknown at the BS, the conditional density f_{H|Ĥ}(H|Ĥ) is assumed to be known at the BS. Each element of the kth column of H̃ for user-k is characterized by an independent and identically distributed (i.i.d.) zero-mean complex Gaussian distribution with E{h̃_k h̃_k^H} = σ²_{e,k} I. The variance of the error σ²_{e,k} is considered to scale exponentially with SNR as σ²_{e,k} = σ²_k P_t^{-α}, where α is the CSIT quality scaling factor [10], [15]-[18]. α = 0 and α = ∞ stand for partial CSIT with finite precision and perfect CSIT, respectively.
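As a concrete illustration of this CSIT error model, here is a minimal Python sketch that draws channel samples consistent with the stated statistics (exact channel entries CN(0, σ_k²), error entries CN(0, σ_{e,k}²) with σ_{e,k}² = σ_k² P_t^{-α}); the function and variable names are ours, not the paper's, and the convention of drawing H and H̃ first and defining Ĥ = H - H̃ is an assumption:

```python
import numpy as np

def sample_channels(Nt, K, Pt, alpha, sigma2=None, rng=None):
    """Draw an exact channel H and a BS-side estimate H_hat such that
    H = H_hat + H_tilde, with per-user error variance
    sigma2_e[k] = sigma2[k] * Pt**(-alpha). Columns are users h_k in C^Nt."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = np.ones(K) if sigma2 is None else np.asarray(sigma2)
    sigma2_e = sigma2 * Pt ** (-alpha)      # CSIT error variance per user

    def cgauss(var):
        # i.i.d. CN(0, var) entries; var broadcasts over the user columns
        return np.sqrt(var / 2) * (rng.standard_normal((Nt, K))
                                   + 1j * rng.standard_normal((Nt, K)))

    H = cgauss(sigma2)                      # exact channel, CN(0, sigma2_k)
    H_tilde = cgauss(sigma2_e)              # estimation error, CN(0, sigma2_e,k)
    H_hat = H - H_tilde                     # estimate available at the BS
    return H, H_hat, H_tilde

# Example: 4 BS antennas, 2 users, SNR = 20 dB (Pt = 100), alpha = 0.6
H, H_hat, H_tilde = sample_channels(Nt=4, K=2, Pt=100.0, alpha=0.6)
```

Note that α = 0 keeps the error variance constant with SNR, while a large α makes the estimate essentially perfect, matching the two limits stated above.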
B. Conventional Dirty Paper Coding-Assisted NOUM

Conventional DPC-assisted NOUM, relying on DPC to encode unicast messages and SC to encode multicast messages, has been studied in [6], [7] for two-user NOUM with perfect CSIT. In this work, we study its performance in the partial CSIT setting. At the transmitter, the unicast messages W_k, k ∈ K, are encoded using DPC based on a certain encoding order π, where π = [π(1), ..., π(K)] is a permutation of {1, ..., K} such that the message W_{π(i)} is encoded before W_{π(j)} if i < j. The BS encodes the unicast messages W_{π(1)}, ..., W_{π(K)} into a set of symbol streams s_{π(1)}, ..., s_{π(K)} and precodes the streams by p_{π(1)}, ..., p_{π(K)} based on DPC, where p_{π(k)} ∈ C^{N_t} is the precoder for s_{π(k)}. The multicast message W_0 is encoded into the multicast stream s_0, precoded by p_0 ∈ C^{N_t}, and superimposed on top of the unicast streams. The resulting transmit signal is x = p_0 s_0 + Σ_{k∈K} p_{π(k)} s_{π(k)}.

At the user side, each user first decodes the multicast stream s_0 into W_0 by treating the interference from all unicast streams as noise. The instantaneous rate at user-π(k) to decode the multicast stream s_0 is given in (4). Once the common message W_0 is decoded, it is removed from the received signal by SIC. Assuming perfect SIC, the received signal at user-π(k) after removing W_0 is given in (5). Note that since DPC at the BS is implemented based on the channel estimate Ĥ, only the interference part ĥ^H_{π(k)} Σ_{i<k} p_{π(i)} s_{π(i)} is removed from the received signal. The instantaneous rate of decoding the unicast stream s_{π(k)} at user-π(k) is given in (6).

Since the BS does not know the exact channel H, a precoder design based on instantaneous rates may be overestimated and unachievable at each user. Therefore, a more robust approach is to design precoders according to the Ergodic Rate (ER), which characterizes the long-term rate performance of each stream over all possible joint fading states {H, Ĥ}. The ERs of decoding s_0 and s_{π(k)} at user-π(k) for conventional DPC-assisted NOUM are defined as the expectations of the instantaneous rates over {H, Ĥ}.¹ To ensure s_0 is successfully decoded at all users, the ER of the multicast stream s_0 should not exceed the minimum multicast ER among all users.

C. Proposed Dirty Paper Coded Rate-Splitting-assisted NOUM

In this work, we aim at exploring larger rate regions of multi-antenna NOUM with partial CSIT by marrying the benefits of DPC and RS. The proposed strategies, as illustrated in Fig. 1, are respectively specified in the following.

¹ The achievability of R^{DPC}_{0,π(k)} and R^{DPC}_{π(k)} follows the discussion in Remark 1 of [8].

1) 1-DPCRS: The first strategy we propose is 1-layer DPCRS (1-DPCRS), shown in Fig. 1(a). The unicast message W_k intended for user-k, ∀k ∈ K, is first split into one common part W_{c,k} and one private part W_{p,k}. The common parts W_{c,1}, ..., W_{c,K} of all users are combined with the multicast message W_0 into the super-common message W_c and encoded into the super-common stream s_c to be decoded by all users. With a certain encoding order π, the private parts W_{p,1}, ..., W_{p,K} are encoded and precoded by DPC. Denoting the stream vector and precoding matrix by s = [s_c, s_{π(1)}, ..., s_{π(K)}]^T and P = [p_c, p_{π(1)}, ..., p_{π(K)}], the resulting transmit signal is x = Ps. Similarly to DPC-assisted NOUM, each user-π(k) first decodes the super-common stream s_c into W_c by treating the interference from all private streams as noise.
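The display equations for the instantaneous rates (4) and (6) did not survive extraction; the following is a hedged reconstruction of the standard forms implied by the decoding procedure just described (the multicast or super-common stream is decoded first treating everything else as noise, and DPC removes only the estimated interference ĥ^H Σ_{i<k} p_{π(i)} s_{π(i)}, leaving a residual through the error h̃ = h - ĥ); unit noise power follows the system model:

```latex
% Hedged reconstruction of the instantaneous rates (cf. Eqs. (4), (6)):
\begin{align}
  % multicast stream, decoded first at user pi(k):
  R_{0,\pi(k)}(\mathbf{H},\hat{\mathbf{H}})
    &= \log_2\!\left(1 + \frac{|\mathbf{h}_{\pi(k)}^{H}\mathbf{p}_0|^2}
       {\sum_{i\in\mathcal{K}} |\mathbf{h}_{\pi(k)}^{H}\mathbf{p}_{\pi(i)}|^2 + 1}\right), \\
  % unicast stream under DPC with imperfect CSIT: interference from
  % i < k survives only through the estimation error \tilde{h}:
  R_{\pi(k)}(\mathbf{H},\hat{\mathbf{H}})
    &= \log_2\!\left(1 + \frac{|\mathbf{h}_{\pi(k)}^{H}\mathbf{p}_{\pi(k)}|^2}
       {\sum_{i<k} |\tilde{\mathbf{h}}_{\pi(k)}^{H}\mathbf{p}_{\pi(i)}|^2
        + \sum_{i>k} |\mathbf{h}_{\pi(k)}^{H}\mathbf{p}_{\pi(i)}|^2 + 1}\right).
\end{align}
% For 1-DPCRS, the super-common rate takes the same form as the first
% line with p_0 replaced by p_c, as stated in the text.
```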
The instantaneous rate R^{1-DPCRS}_{c,π(k)}(H, Ĥ) at user-π(k) to decode the super-common stream s_c is defined in the same way as the right-hand side of (4), by replacing p_0 with p_c. Based on SIC, the decoded super-common message W_c then goes through the process of re-encoding, precoding, and subtraction from the received signal. User-π(k) then decodes the intended private stream s_{π(k)}. The instantaneous rate at user-π(k) to decode the private stream s_{π(k)} is defined in the same way as the right-hand side of (6), i.e., R^{1-DPCRS}_{π(k)}(H, Ĥ) = R^{DPC}_{π(k)}(H, Ĥ). Once W_c and W_{p,π(k)} are decoded, user-π(k) reconstructs the original multicast and unicast messages by extracting W_0 and W_{c,π(k)} from W_c, and then combining W_{c,π(k)} with W_{p,π(k)} into W_{π(k)}. The ERs R̄^{1-DPCRS}_{c,π(k)} and R̄^{1-DPCRS}_{π(k)} are defined accordingly. The ER of the super-common stream, R̄^{1-DPCRS}_c, includes the ER of transmitting the multicast message as well as the ERs of transmitting the common parts of the unicast messages. Denoting the ER allocated to the multicast message W_0 by C̄_0 and the ER allocated to W_{c,k} by C̄_k, we obtain C̄_0 + Σ_{k∈K} C̄_k = R̄^{1-DPCRS}_c. The total ER of decoding the unicast message W_{π(k)} at user-π(k) using 1-DPCRS is R̄^{1-DPCRS}_{π(k),tot} = C̄_{π(k)} + R̄^{1-DPCRS}_{π(k)}.

2) M-DPCRS: M-DPCRS is an extension of 1-DPCRS embracing the generalized RS framework proposed in [8]. The idea is to split the unicast message of each user into more different parts and encode them into multiple layers of common streams, each intended for one subset of users. The multicast message is still encoded with some common parts of the unicast messages into the super-common stream to be decoded by all users. Due to the page limitation, the system model of M-DPCRS is not specified here. It can be easily traced out from M-DPCRS in Section II.C of [8] for MISO BC and 1-DPCRS in Fig. 1(a) for multi-antenna NOUM. Fig. 1(b) illustrates one example of the proposed M-DPCRS for NOUM when K = 3.

III. PROBLEM FORMULATION AND OPTIMIZATION FRAMEWORK

In this section, we formulate the weighted average sum rate maximization problem and specify the corresponding optimization framework to solve it.

A. Weighted Average Sum Rate Maximization Problem

We study the precoder optimization problem at the transmitter with the aim of maximizing the Weighted Ergodic Sum Rate (WESR) of the unicast messages subject to the Quality of Service (QoS) rate constraints of the multicast and unicast messages. The WESR is defined as Σ_{k∈K} u_k R̄^{1-DPCRS}_{k,tot}, where u_k is the weight for user-k. We further define the Average Rate (AR) of decoding the stream s_i, i ∈ {c, k}, at user-k, k ∈ K, for a given channel estimate Ĥ and precoder P(Ĥ), as in (10). The rate vector c̄ = [C̄_0, C̄_1, ..., C̄_K] for 1-DPCRS-assisted NOUM contains the rates allocated to the multicast message W_0 as well as to the common parts W_{c,1}, ..., W_{c,K} of the unicast messages for each Ĥ. It is required to be jointly optimized with the precoders so as to maximize the WASR. R^{th}_{π(k)} is the QoS rate constraint of the unicast message W_{π(k)} and R^{th}_0 is the QoS rate constraint of W_0. Compared with problem (19) in [8] for 1-DPCRS-assisted MISO BC, the main difference of problem (11) comes from constraints (11b) and (11d), due to the additional multicast message W_0 to be transmitted to all users. The WASR problem of DPC-assisted NOUM is formulated by turning off C̄_1, ..., C̄_K in (11). The problem for M-DPCRS-assisted NOUM can be formulated analogously, by combining problem (20) in [8] for M-DPCRS-assisted MISO BC with problem (11) for 1-DPCRS-assisted NOUM.

B. Optimization Framework

The formulated problem (11) is stochastic and non-convex.
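Since the display of problem (11) is missing, here is a schematic version assembled from the constraints the text enumerates (rate allocation of the super-common stream, the QoS constraints (11b)/(11d) for the multicast and unicast messages, and the transmit power budget); constraint labels and the exact functional form are our guesses:

```latex
% Schematic reconstruction of the WASR problem (11) for 1-DPCRS:
\begin{align}
  \max_{\mathbf{P},\,\bar{\mathbf{c}}}\quad
    & \sum_{k\in\mathcal{K}} u_k\,\bar{R}^{\text{1-DPCRS}}_{k,\mathrm{tot}}
      \qquad \Big(\bar{R}_{k,\mathrm{tot}} = \bar{C}_k + \bar{R}_{k}\Big) \\
  \text{s.t.}\quad
    & \bar{C}_0 + \textstyle\sum_{k\in\mathcal{K}} \bar{C}_k
      \;\le\; \min_{k\in\mathcal{K}} \bar{R}_{c,\pi(k)}
      && \text{(decodability of } s_c \text{ at all users)} \\
    & \bar{R}^{\text{1-DPCRS}}_{k,\mathrm{tot}} \;\ge\; R^{\mathrm{th}}_{k},
      \qquad \bar{C}_0 \;\ge\; R^{\mathrm{th}}_{0}
      && \text{(QoS, cf. (11b), (11d))} \\
    & \operatorname{tr}(\mathbf{P}\mathbf{P}^{H}) \;\le\; P_t,
      \qquad \bar{\mathbf{c}} \;\ge\; \mathbf{0}.
\end{align}
```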
To solve the problem, we extend the optimization framework proposed in [10]. Specifically, we first transform the original stochastic problem into a deterministic form by using the Sample Average Approximation (SAA) approach. The approximated deterministic problem is further transformed into an equivalent Weighted Minimum Mean Square Error (WMMSE) problem, which is then solved by using an Alternating Optimization (AO) algorithm. Each step of the optimization framework is explained in the following.

SAA is first adopted to approximate the stochastic AR in (10). With the channel samples introduced in (12) and the strong Law of Large Numbers (LLN), the ARs R̄^{1-DPCRS}_{i,k}(Ĥ) specified in (10) are equivalent to their sample averages. Denoting the sampled AR with sample size M accordingly, problem (11) is transformed into its deterministic form (14), in which the precoder P and the common stream allocation vector c̄ are designed over all M channel samples.

The approximated deterministic problem (14) is still non-convex, due to the non-convex approximated rate expressions of the common stream and the private streams. To solve the non-convex problem (14), we further extend the WMMSE algorithm proposed in [10], [19]. User-π(k) employs the equalizer g^i_{π(k)} to decode the data stream s_i. The Mean Square Error (MSE) of stream s_i, i ∈ {c, π(k)}, at user-π(k) is given in (15), and w^i_{π(k)} is the weight introduced for the MSE of user-π(k). By taking the equalizers and weights as optimization variables, the Rate-WMMSE relationship for an instantaneous channel realization is established as in (16). Based on (16), problem (14) is equivalently transformed into the WMMSE problem (17), where x̄ = [X̄_0, X̄_1, ..., X̄_K] is the transformation of c̄ satisfying x̄ = -c̄, and w = {w^{i,(m)}_{π(k)} | i ∈ {c, π(k)}, k ∈ K, m ∈ M} and g = {g^{i,(m)}_{π(k)} | i ∈ {c, π(k)}, k ∈ K, m ∈ M} are the MSE weights and equalizers, respectively.

Problem (17) is block-wise convex with respect to each of the blocks w, g and (P, x̄) when the other two blocks are fixed, which motivates us to use an AO algorithm to solve the problem. At each iteration [n], for the given P^{[n-1]} from the previous iteration, the solutions g^{[n]} = g*(P^{[n-1]}) and w^{[n]} = w*(P^{[n-1]}) of (17) are given in closed form by (18) and (19), respectively, where each weight and equalizer is calculated based on the mth channel sample in H^{(M)}. The solutions of weights and equalizers in (18) and (19) satisfy the Karush-Kuhn-Tucker (KKT) conditions of (17). Substituting (18) and (19) back into (17), the optimization problem becomes (20). Problem (20) is a standard Quadratically Constrained Quadratic Program (QCQP), which can be solved using interior-point methods [20]. Therefore, we obtain the AO algorithm specified in Algorithm 1: the weights w and equalizers g are updated using (18) and (19), then (P, x̄) is updated by solving (20) using the updated w, g, and the process is repeated until the WASR of the system, WASR^{[n]}, converges. The convergence proof of Algorithm 1 follows [8], [10] and is not repeated here. By using the same method, we can also obtain the formulated problem and the corresponding optimization framework for DPC- and M-DPCRS-assisted NOUM.
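To make the alternating structure of Algorithm 1 concrete, here is a minimal Python skeleton of the SAA + WMMSE + AO loop; the closed-form MMSE equalizer and weight updates follow the standard Rate-WMMSE relations (cf. (18)-(19)), while `solve_precoder_qcqp` stands in for the interior-point solve of problem (20) (done in the paper with CVX) and is left as a stub. For brevity the private-stream stage is written for the linearly precoded case without DPC pre-cancellation or the common-rate allocation, and all names are ours:

```python
import numpy as np

def mse_stage(h, P, i, noise=1.0):
    """Closed-form MMSE equalizer, minimum MSE and weight for stream i
    (column i of P) at a user with channel h, treating every other
    column of P as interference. Standard WMMSE relations."""
    rx = np.abs(h.conj() @ P) ** 2          # |h^H p_j|^2 for all streams j
    T = rx.sum() + noise                    # total receive power
    g = np.conj(h.conj() @ P[:, i]) / T     # MMSE equalizer
    eps = 1.0 - rx[i] / T                   # minimum MSE
    return g, eps, 1.0 / eps                # weight w = 1/eps (KKT-optimal)

def solve_precoder_qcqp(P, weights, equalizers, H_samples, Pt):
    """Stub for problem (20): with w and g fixed, the precoder and common
    rate update is a convex QCQP, solved in the paper by interior-point
    methods (CVX). A real solver call would go here."""
    return P

def ao_wmmse(H_samples, P0, Pt, iters=50, tol=1e-4):
    """Skeleton of Algorithm 1: alternate closed-form (w, g) updates with
    the convex precoder update until the sampled WASR converges.
    H_samples: list of (Nt, K) channel samples (SAA); P0: (Nt, K+1),
    column 0 = (super-)common precoder, column k = private precoder."""
    P, last = P0.copy(), -np.inf
    for _ in range(iters):
        W, G, wasr = [], [], 0.0
        for H in H_samples:                 # SAA: average over samples
            for k in range(H.shape[1]):
                h = H[:, k]
                gc, ec, wc = mse_stage(h, P, 0)     # common stream first
                gk, ek, wk = mse_stage(h, np.delete(P, 0, axis=1), k)
                G.append((gc, gk)); W.append((wc, wk))
                wasr += -np.log2(ek) / len(H_samples)  # sampled private AR
        P = solve_precoder_qcqp(P, W, G, H_samples, Pt)
        if abs(wasr - last) < tol:          # WASR convergence check
            break
        last = wasr
    return P
```

The block-wise convexity noted in the text is what makes this loop monotone: each of the three updates can only increase the WMMSE objective, which is why convergence of WASR^{[n]} is guaranteed.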
IV. NUMERICAL RESULTS

In this section, we evaluate the performance of the proposed 1-DPCRS and M-DPCRS strategies for NOUM. The CVX toolbox [21] is adopted to tackle problem (20), which requires to be solved by the interior-point method. The exact channel h_k and the channel estimation error h̃_k have i.i.d. complex Gaussian entries drawn from the distributions CN(0, σ²_k) and CN(0, σ²_{e,k}), respectively, with σ²_{e,k} = σ²_k P_t^{-α}. The sample size of the SAA method is M = 1000. The WESR is obtained by averaging the WASR over 100 channel realizations. The precoder initialization for Algorithm 1 follows the methods in [8]. The QoS rate constraint of the multicast stream is R^{th}_0 = 0.5 bit/s/Hz, and the SNR is 20 dB.

We compare the following eight transmission strategies in the results. "1-DPCRS" and "M-DPCRS" are the strategies we proposed in Section II-C. "DPC" is the strategy described in Section II-B. "Generalized RS", "1-layer RS", "SC-SIC", "SC-SIC per group" and "MU-LP" are the linearly precoded strategies proposed in [14] for multi-antenna NOUM.

Fig. 2 illustrates the two-user ER region comparison of all strategies. When K = 2, we use the term "DPCRS" to represent both M-DPCRS and 1-DPCRS, and the term "RS" to represent both generalized RS and 1-layer RS, since M-DPCRS and generalized RS respectively reduce to 1-DPCRS and 1-layer RS. In all subfigures, DPCRS maintains the largest rate region among all strategies. Interestingly, we find that the existing linearly precoded RS studied in [14], benefiting from its robustness towards partial CSIT, outperforms DPC in most cases. In the three-user case, we study the Ergodic Sum Rate (ESR, i.e., u_k = 1, ∀k ∈ K) comparison versus CSIT accuracy in Fig. 3. Overall, M-DPCRS achieves the highest ESR, with an explicit ESR improvement over the DPC-, MU-LP- and SC-SIC-assisted strategies. The linearly precoded RS strategies (generalized RS and 1-layer RS) outperform non-linear DPC, especially in the region with strong CSIT inaccuracy.

V. CONCLUSION

In this work, we propose a novel strategy, namely Dirty Paper Coded Rate-Splitting (DPCRS), that incorporates RS with DPC to assess the rate region of multi-antenna non-orthogonal unicast and multicast transmission with partial CSIT. By splitting the unicast messages of each user into common and private parts, using DPC to encode the private parts, and jointly encoding the multicast message with the common parts of the unicast messages, DPCRS is able to partially decode interference and partially treat interference as noise, further restraining the interference between multicast and unicast messages as well as the multi-user interference among unicast messages. Numerical results show that linearly precoded RS-assisted NOUM is able to achieve a larger rate region than DPC-assisted NOUM, with much lower hardware and computational complexity. The proposed DPCRS-assisted NOUM outperforms all existing strategies, and it is more robust to CSIT inaccuracies, network loads and user deployments.
Editorial: Scents that Matter—from Olfactory Stimuli to Genes, Behaviors and Beyond

Mammals can recognize a large variety of scents that give information about the environment, conspecifics, and other species. The present research topic is focused on "scents that matter," i.e., scents that indicate stimuli which are crucial for the survival of an organism. These can be positively related stimuli like the smell of familiar conspecifics, mating partners, or food, but also negatively related stimuli like the scent of potential predators, spoiled food, or territorial and aggressive conspecifics.

A prerequisite for this important role of scents in animals' lives is that they can be well detected and recognized. During the last decades, our understanding of olfactory perception has been largely improved, mainly inspired by the work of Linda Buck and Richard Axel (e.g., Buck and Axel, 1991), which was recognized with the Nobel Prize in 2004. Many of the scents studied in this research topic are processed by the vomeronasal system (e.g., Haga-Yamanaka et al.; Yu), but quite often the main olfactory system is additionally involved (e.g., Rattazzi et al.). A lot of current research addresses the questions of which molecules activate which olfactory receptors and which molecular cascades are modulated by these receptors, or how the different olfactory receptors and the two olfactory systems work together. In the current research topic, the articles of Ben-Shaul, Kelliher and Munger, Rattazzi et al., and Yu provide new perspectives in this interesting field of research.

Besides the detection mechanisms of relevant scents, many studies are focusing on the behavioral changes induced by these scents. Most of these studies analyze scents signaling potential dangers. One reason for focusing on danger-signaling odors may be that the behavioral effects of these scents are easier to induce and measure. In addition, it is widely believed that these scents are more critical for fostering the survival of animals. Basically, such danger-signaling scents with aversive-like effects are classified as (a) kairomones, which are emitted by another species such as predators (e.g., Apfelbach et al.; Osada et al.), or (b) pheromones, which are emitted by conspecifics, such as alarm pheromones (e.g., Kobayashi et al.; Breitfeld et al.). Both classes of scents warn about a potential threat, which is intended in the case of pheromones, but unintended in the case of kairomones as they lead to a detriment of the emitter (see Nielsen et al.). It is widely believed that predator odors and alarm pheromones are innately recognized, as these stimuli are still effective in laboratory animals that have lived many generations in the absence of predators (Apfelbach et al.; Fendt et al., 2005). In addition to the general impact of predator odors on the behavior of prey animals, an interesting line of research is the identification of active components in these scents. In the case of predator odors, several molecules have been identified so far: trimethylthiazoline (Taugher et al.; Fortes-Marco et al.; summarized in Rosen et al.), different pyrazines (Osada et al.) and pyridines (Brechbühl et al.), or 2-phenylethylamine (Ferrero et al., 2011).
In the present research topic, a number of studies demonstrate that these compounds are able to induce a wide array of defensive responses in laboratory rodents, such as avoidance behavior (Wernecke and Fendt; Brechbühl et al.; Fortes-Marco et al.), freezing (Taugher et al.; Fortes-Marco et al.), risk assessment behavior (Breitfeld et al.), or an inhibition of appetitive-like behavior (Kobayashi et al.), as well as physiological changes like a modulation of blood pressure (Brechbühl et al.) or breathing (Taugher et al.). Although these single molecules have the advantage that they can be better controlled in an experimental procedure (e.g., concentration), the natural scents, i.e., blends, are usually more efficient in inducing behavioral changes (summarized in Apfelbach et al.).

The neural mechanisms underlying the behavioral and physiological changes induced by danger-signaling scents are meanwhile partly understood. In the current research topic, studies are focused on brain sites like the bed nucleus of the stria terminalis (Breitfeld et al.; Taugher et al.), the medial amygdala (Carvalho et al.), the periaqueductal gray (Canteras et al.), and different subnuclei of the hypothalamus (Canteras et al.; Kobayashi et al.). Interestingly, these brain sites are of minor or no importance for learned fear, whose neural basis is well understood (Fendt and Fanselow, 1999; LeDoux, 2012), suggesting a clear neuronal differentiation between innate and learned fear.

In fear learning, the danger-predicting property of a stimulus is learned by Pavlovian associative learning. Of course, olfactory stimuli can be used for such associative learning, either as unconditioned (Yuan et al.; Fortes-Marco et al.) or conditioned stimuli (Ferry et al.; Yuan et al.). The latter means that a scent without emotional valence can gain danger-predicting, i.e., fear-inducing, properties. Notably, even if a stimulus from another sensory modality is used as a conditioned stimulus in such a fear learning experiment, scents may still play some role, since they are usually part of the experimental context (e.g., conditioning box, experimenter) and may be associated with the danger simultaneously. In fear learning, the lateral amygdala is important for associating a discrete cue with a danger stimulus, whereas the hippocampus plays an important role in contextual fear learning. Interestingly, novel work of the present research topic demonstrated that different regions of the hippocampus have different roles during contextual fear conditioning with odors (Yuan et al.). In addition to the hippocampus, several cortical areas such as the entorhinal cortex are involved in contextual fear learning (Ferry et al.).

So far, there is little research on the effects of danger-signaling scents in humans. However, the defensive behaviors induced by danger-predicting scents and the respective physiological changes observed in animals are connected to anxiety in humans. Therefore, one perspective is that a deeper understanding of the neuroanatomical and neuropharmacological basis of odor-induced fear in animals may also help to find new treatment strategies for anxiety disorders in humans.

As noted above, scents can also serve as positive stimuli. This is of specific interest in the context of social behavior (Wöhr; Noack et al.; Fuzzo et al.) and foraging (Kelliher and Munger). These aspects are also covered by several articles in this special issue.
It has been shown that one important function of these scents is to help recognize social partners (Ben-Shaul; Noack et al.). Thereby, they induce and modulate a variety of behaviors, including ultrasonic calls which are typical for pleasant situations (Wöhr). In the case of social buffering, the scent of a conspecific is able to reduce fear (Fuzzo et al.). These two quite different effects of social scents are mediated by different subnuclei of the amygdala (Fuzzo et al.; Noack et al.). Notably, there is also potential for translational research with "social scents." For example, a genetic mouse model of autism is less able to modulate ultrasonic vocalization in response to familiar scents (Wöhr).

The present research topic nicely represents the different approaches used in "olfactory research" on relevant scents. These approaches include cell biology, genetics, behavioral pharmacology, neuroanatomy, as well as computational neuroscience. Scientists from all these fields work effectively together to unravel the mechanisms of how scents matter in humans and animals. We are grateful to all contributors of this research topic. Eighty-five different authors from 10 different countries contributed with research and review articles. Furthermore, we thank the reviewers who helped us and the authors to create an interesting and high-quality research topic. We hope that you enjoy reading this research topic as much as we have enjoyed editing it.
Basically, such danger-signaling scents with aversive-like effects are classified as (a) kairomones, which are emitted by another species such as predators (e.g., Apfelbach et al.; Osada et al.) or (b) pheromones, that are emitted by conspecifics such as alarm pheromones (e.g., Kobayashi et al.; Breitfeld et al.). Both classes of scents warn about a potential threat, which is intended in the case of pheromones, but unintended in the case of kairomones as they lead to a detriment of the emitter (see Nielsen et al.). It is widely believed that predator odors and alarm pheromones are innately recognized, as these stimuli are still effective in laboratory animals that have lived many generations in the absence of predators (Apfelbach et al.; Fendt et al., 2005). In addition to the general impact of predator odors on the behavior of prey animals, an interesting line of research is the identification of active components in these scents. In the case of predator odors, several molecules have been identified so far: trimethlythiazoline (Taugher et al . Interestingly, these brain sites are of minor or no importance for learned fear whose neural basis is well understood (Fendt and Fanselow, 1999;LeDoux, 2012), suggesting a clear neuronal differentiation between innate and learned fear. In fear learning, the danger-predicting property of a stimulus is learned by Pavlovian associative learning. Of course, olfactory stimuli can be used for such associative learning, either as unconditioned (Yuan et al.; Fortes-Marco et al.) or conditioned stimuli (Ferry et al.; Yuan et al.). The latter means that a scent without emotional valance can gain danger-predicting, i.e., fear-inducing, properties. Notably, even if a stimulus from another sensory modality is used as a conditioned stimulus in such a fear learning experiment, scents may still play some roles, since they are usually part of the experimental context (e.g., conditioning box, experimenter) and may be associated with the danger simultaneously. In fear learning, the lateral amygdala is important for associating a discrete cue with a danger stimulus, whereas the hippocampus plays an important role in contextual fear learning. Interestingly, novel work of the present research topic demonstrated that different regions of the hippocampus have different roles during contextual fear conditioning with odors (Yuan et al.). In addition to the hippocampus, several cortical areas such as the entorhinal cortex are involved in contextual fear learning (Ferry et al.). So far, there is little research on the effects of danger signaling scents in humans. However, the defensive behaviors induced by danger-predicting scents and the respective physiological changes observed in animals are connected to anxiety in humans. Therefore, one perspective is that a deeper understanding of the neuroanatomical and neuropharmacological basis of odorinduced fear in animals may also help to find new treatment strategies for anxiety disorders in humans. As noted above, scents can also serve as positive stimuli. This is of specific interest in the context of social behavior (Wöhr; Noack et al.; Fuzzo et al.) and foraging (Kelliher and Munger). These aspects are also covered by several articles in this special issue. It has been shown that one important function of these scents is to help to recognize social partners (Ben-Shaul; Noack et al.). Thereby, they induce and modulate a variety of behaviors, including ultrasonic calls which are typical for pleasant situations (Wöhr). 
In the case of social buffering, the scent of a conspecific is able to reduce fear (Fuzzo et al.). These two, quite different effects of social scents are mediated by different subnuclei of the amygdala (Fuzzo et al.; Noack et al.). Notably, there is also potential for translational research with "social scents." For example, a genetic mouse model of autism is less able to modulate ultrasonic vocalization in response to familiar scents (Wöhr). The present research topic nicely represents the different approaches used in "olfactory research" of relevant scents. These approaches include cell biology, genetics, behavioral pharmacology, neuroanatomy, as well as computational neuroscience. Scientists from all these fields work effectively together to unravel the mechanisms of how scents matter in humans and animals. We are grateful to all contributors of this research topic. Eighty-five different authors from 10 different countries contributed with research and review articles. Furthermore, we thank the reviewers which helped us and the authors to create an interesting and high-quality research topic. We hope that you enjoy reading this research topic as much as we have enjoyed editing it. AUTHOR CONTRIBUTIONS MF wrote the first draft of the editorial, all authors revised the manuscript and approved the final version of it.
Time Series Analysis of MODIS-Derived NDVI for the Hluhluwe-iMfolozi Park, South Africa: Impact of Recent Intense Drought

The variability of temperature and precipitation influenced by the El Niño-Southern Oscillation (ENSO) is potentially one of the key factors contributing to vegetation production in southern Africa. Thus, understanding large-scale ocean-atmospheric phenomena like ENSO and the Indian Ocean Dipole/Dipole Mode Index (DMI) is important. In this study, 16 years (2002-2017) of Moderate Resolution Imaging Spectroradiometer (MODIS) Terra/Aqua 16-day normalized difference vegetation index (NDVI) data, extracted and processed using the JavaScript code editor in the Google Earth Engine (GEE) platform, were used to analyze the response of vegetation in the oldest proclaimed nature reserve in Africa, the Hluhluwe-iMfolozi Park (HiP), to climatic variability. The MODIS enhanced vegetation index (EVI), burned area index (BAI), and normalized difference infrared index (NDII) were also analyzed. The study used monthly mean soil temperature and precipitation from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) model. The Global Land Data Assimilation System (GLDAS) evapotranspiration (ET) data were used to investigate vegetation water stress in the HiP. The region in the southern part of the HiP, whose land cover is dominated by savanna, experienced the strongest impact of the strong El Niño. Both the inter-annual Mann-Kendall trend test and the sequential Mann-Kendall (SQ-MK) test applied to the HiP NDVI indicated a significant downward trend during the El Niño years of 2003 and 2014-2015. The significant SQ-MK trend turning point thought to be associated with the 2014-2015 El Niño period began in November 2012. The wavelet coherence and coherence phase indicated a positive teleconnection/correlation between the NDVI and soil temperature, precipitation, soil moisture (NDII), and ET. This was explained by a dominant in-phase relationship between the NDVI and the climatic parameters, especially at a period band of 8-16 months.

Introduction

Vegetation within protected areas such as game reserves provides wildlife and society with indispensable ecosystem goods and services [1], including food, medicinal resources, aesthetic value, and recreational opportunities [2]. However, inappropriate management and other disturbances affect the potential productivity and spatial extent of this resource [3]. Thus, any factor that poses a threat to vegetation and its associated benefits, and that could affect productivity in protected areas, needs to be identified and monitored. One such threat is an increase in temperature above normal, as well as a prolonged decline in precipitation and soil moisture, leading to extreme climatic events such as droughts, which severely affect vegetation productivity [4]. Drought-related impacts are becoming more multifaceted, as shown by their rapidly growing consequences in sectors such as recreation and tourism, agriculture, and energy [5].
There are two main rivers that pass through this nature park, namely the Black and White Umfolozi. The entire area of the park is fenced and borders on populated rural communities. Vegetation varies from semi-deciduous forests in the north of Hluhluwe to open savanna woodlands in the southern iMfolozi. Much of the area is dominated by woodland savanna interspersed with shrub thicket [34]. The northern part of the park has hilly terrain and is dominated by forest. The climate is subtropical with summer rainfall. It receives a mean annual rainfall ranging from 700 to 985 mm, much of it occurring between October and March [35]. The park supports approximately 1200 plant species, including 300 tree and 150 grass species.

Data

In order to investigate the variability of vegetation in the HiP in response to climatic conditions, as well as the recent intense drought of 2014-2016, we opted to use the monthly averaged MODIS Terra/Aqua 16-day datasets measured for the period from 2002 to 2017 (16 years). With its considerable time resolution (about four images per month) compared to other satellites, MODIS imagery was the most appropriate for this study given the size of the geographic area. The MODIS data used here are archived in GEE as an image collection. This data product is generated from a MODIS/MCD43A4 version 6 surface reflectance composite. More details about the MCD43A4 MODIS/Terra and Aqua nadir BRDF-adjusted reflectance daily level 3 global 500 m SIN grid V006 data can be found in the study by Schaaf et al. [36]. The data were extracted and processed using the JavaScript code editor in the GEE platform (https://earthengine.google.com/, Mountain View, CA, USA) (see Appendix A), which provides parallel computing and large-scale data processing even for very large study areas. For the purpose of this investigation, our main parameter is the NDVI, but we also considered other vegetation indices, namely the enhanced vegetation index (EVI), the burned area index (BAI), and the normalized difference infrared index (NDII). The BAI was included in order to detect possible vegetation burning activity, which may have been triggered by the drier conditions associated with an intense drought period. The NDII has recently been shown to be a robust indicator for monitoring the moisture content in the root zone from the observed moisture state of vegetation [19,21].
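The authors' extraction script (Appendix A of the original article) is written for the GEE JavaScript code editor and is not reproduced here. The following Python sketch illustrates the same kind of workflow with the Earth Engine Python API; the asset ID, band mapping, and the placeholder rectangle standing in for the HiP boundary are assumptions for illustration, not the authors' actual code.

```python
# Minimal sketch of a GEE extraction workflow (assumed asset ID and
# band mapping; the rectangle is a rough placeholder for the HiP
# boundary, not the real park polygon).
import ee

ee.Initialize()

hip = ee.Geometry.Rectangle([31.7, -28.4, 32.2, -28.0])

def add_indices(img):
    # MCD43A4: Band1 = red, Band2 = NIR, Band6 = SWIR.
    ndvi = img.normalizedDifference(
        ['Nadir_Reflectance_Band2', 'Nadir_Reflectance_Band1']).rename('NDVI')
    ndii = img.normalizedDifference(
        ['Nadir_Reflectance_Band2', 'Nadir_Reflectance_Band6']).rename('NDII')
    return img.addBands(ndvi).addBands(ndii)

col = (ee.ImageCollection('MODIS/006/MCD43A4')
       .filterDate('2002-01-01', '2017-12-31')
       .map(add_indices))

# Area-averaged NDVI for a single month, reduced to one number.
monthly = col.filterDate('2015-11-01', '2015-12-01').select('NDVI').mean()
stats = monthly.reduceRegion(reducer=ee.Reducer.mean(), geometry=hip, scale=500)
print(stats.getInfo())
```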
These spectral indices were calculated using the standard formulas

NDVI = (NIR − R)/(NIR + R),

EVI = 2.5 (NIR − R)/(NIR + 6R − 7.5B + 1),

BAI = 1/((0.1 − R)² + (0.06 − NIR)²),

NDII = (NIR − SWIR1)/(NIR + SWIR1),

where B, R, NIR, and SWIR1 are spectral bands in the blue (450-500 nm), red (600-700 nm), near-infrared (700-1300 nm), and shortwave infrared (1550-1750 nm) regions.
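As a quick check of these formulas, the short sketch below evaluates all four indices on arrays of surface reflectance values; the sample reflectances are made up for the example and are not MODIS measurements.

```python
# Sketch: evaluating NDVI, EVI, BAI, and NDII from reflectance arrays.
# The sample reflectances are illustrative, not MODIS data.
import numpy as np

b = np.array([0.04, 0.05])      # blue
r = np.array([0.06, 0.12])      # red
nir = np.array([0.35, 0.20])    # near-infrared
swir1 = np.array([0.20, 0.18])  # shortwave infrared

ndvi = (nir - r) / (nir + r)
evi = 2.5 * (nir - r) / (nir + 6.0 * r - 7.5 * b + 1.0)
bai = 1.0 / ((0.1 - r) ** 2 + (0.06 - nir) ** 2)
ndii = (nir - swir1) / (nir + swir1)

print(ndvi, evi, bai, ndii)
```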
In this study, we derived the precipitation values averaged over the study area for the period from 2002 to 2017 using the Climate Engine Application (CEA, http://climateengine.org/, Moscow, ID, USA), while the monthly mean soil temperature data were derived from the National Aeronautics and Space Administration (NASA, Washington, DC, USA): http://giovanni.gsfc.nasa.gov. Both the soil temperature and precipitation data are outputs of the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) model [37]. The MERRA model is an American global reanalysis tool operating from 1979 onwards that is based on the NASA Goddard Earth Observing System Data Assimilation System, version 5 (GEOS-5). The MERRA-2 model data are given at a spatial resolution of 0.67° × 0.50° at 1-hourly to 6-hourly intervals.

There is always an expected variability of surface water content due to changes in both weather and climatic conditions. Therefore, in a study such as this one, it is essential to investigate the water lost to the atmosphere through both evaporation and transpiration, as this could explain details about vegetation water stress. Given that the study area is remote and has no evaporation or transpiration measurement records, we opted to use the Global Land Data Assimilation System (GLDAS) evapotranspiration (ET) data. The GLDAS system was designed to generate optimal fields of land surface states and fluxes, and it is capable of generating quality-controlled, spatially and temporally consistent terrestrial hydrological data, including ET and other related parameters [38].

The ENSO phenomenon largely influences rainfall and temperature conditions over southern Africa [39,40]. Previous studies have demonstrated how vegetation responds significantly to ENSO [40] and the DMI [41] as measures of climatic conditions [42-44] in some parts of southern Africa. Thus, in order to investigate changes in vegetation in the HiP due to variability in climatic conditions, it is important to consider these climate indices. In this study, we used the Niño3.4 monthly mean time series retrieved from the National Oceanic and Atmospheric Administration (NOAA) website (https://www.esrl.noaa.gov/psd/gcos_wgsp/Timeseries/, Washington, DC, USA). The Niño3.4 index is calculated by taking the area-averaged sea-surface temperature (SST) within the Niño3.4 region, which spans 5°N-5°S latitude and 120°W-170°W longitude in the Pacific Ocean. The DMI, on the other hand, is calculated by taking the difference between the SST anomalies in the western (50°E-70°E, 10°S-10°N) and eastern (90°E-110°E, 10°S-0°N) sectors of the equatorial Indian Ocean [41]. The DMI data were downloaded from http://www.jamstec.go.jp/frcgc/research/d1/iod/iod/dipole_mode_index.html. The relevant time series of Niño3.4 and DMI are shown in Figure 2.

Multiple-Linear Regression

One of the principal objectives of this study is to quantify the effects of temperature, precipitation, ET, soil moisture at the root zone (NDII), ENSO, and DMI on the NDVI as a surrogate for vegetation in the study area. Multiple-linear regression analysis (MLR), which is commonly used to explain the relationship between one continuous dependent variable and two or more independent variables, was employed. The MLR model output for n observations can be represented as

y_i = β_0 + β_1 x_i1 + β_2 x_i2 + ... + β_p x_ip + ε_i,

where y_i is the dependent variable (NDVI in this case), the x_ip are the independent variables (soil temperature, precipitation, Niño3.4, and DMI in this case), β_0 is the intercept, and β_1, β_2, ..., β_p are the coefficients of the x terms. The term ε_i represents the error term, which the model always tries to minimize.
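As an illustration of fitting such a model, the sketch below estimates the coefficients by ordinary least squares with numpy; the series are synthetic stand-ins for the study's monthly means, and this is not the authors' actual analysis code.

```python
# Sketch: MLR of NDVI on climatic drivers via ordinary least squares.
# The series below are synthetic placeholders for the monthly means.
import numpy as np

rng = np.random.default_rng(0)
n = 192  # 16 years of monthly values
soil_temp = rng.normal(295.0, 3.0, n)
precip = rng.gamma(2.0, 30.0, n)
nino34 = rng.normal(0.0, 1.0, n)

# Synthetic response with known coefficients plus noise.
ndvi = (0.2 + 0.001 * soil_temp + 0.0005 * precip - 0.02 * nino34
        + rng.normal(0.0, 0.01, n))

# Design matrix with an intercept column (beta_0).
X = np.column_stack([np.ones(n), soil_temp, precip, nino34])
beta, residuals, rank, sv = np.linalg.lstsq(X, ndvi, rcond=None)
print("beta_0..beta_3:", beta)
```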
Mann-Kendall Test

It is always useful to assess the monotonic trends in a time series of any geophysical data. In this study, the Mann-Kendall test [45-47] was used. This is a non-parametric, rank-based test method which is commonly used to identify monotonic trends in time series of climate, environmental, or hydrological data. Non-parametric methods are known to be resilient to outliers [48], hence it is desirable to choose such methods. Based on the study by Kendall [47] and, more recently, by Pohlert [49] and others, the Mann-Kendall test statistic is calculated from the following formula:

S = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} sgn(x_j − x_i),

where

sgn(θ) = +1 if θ > 0, 0 if θ = 0, and −1 if θ < 0.

The average value of S is E[S] = 0, and the variance σ² is given by the following equation:

σ² = [n(n − 1)(2n + 5) − Σ_{j=1}^{p} t_j(t_j − 1)(2t_j + 5)]/18,

where t_j is the number of data points in the jth tied group, and p is the number of tied groups in the time series. It is important to mention that the summation operator in the above equation is applied only in the case of tied groups in the time series, in order to reduce the influence of individual values in tied groups on the ranked statistics. On the assumption of a random and independent time series, the statistic S is approximately normally distributed, provided that the following z-transformation equation is used:

z = (S − 1)/σ if S > 0; z = 0 if S = 0; z = (S + 1)/σ if S < 0.

The value of the S statistic is closely associated with the Kendall τ. With regard to the z-transformation equation defined above, this study considered a 5% significance level, where the null hypothesis of no trend was rejected if |z| > 1.96. Another important output of the Mann-Kendall statistic is the Kendall τ term, which is a measure of correlation indicating the strength of the relationship between any two independent variables. In this study, the Mann-Kendall test summarized above was applied to the NDVI data by writing a piece of code in R and following the instructions of Pohlert [49].

The Mann-Kendall trend method can be extended into a sequential version of the Mann-Kendall test statistic, called the sequential Mann-Kendall (SQ-MK) test. This method was proposed by Reference [50], and it is used to detect approximate potential trend turning points in long-term time series. The method generates two series, a forward/progressive one, u(t), and a backward/retrograde one, u'(t). To utilize the effectiveness of this trend detection method, both the progressive and the retrograde series must be plotted in the same figure. If they cross each other and diverge beyond a specific threshold (±1.96 in this study), then there is a statistically significant trend, and the region where they cross each other indicates the time period where the trend turning point begins [51]. Basically, this method is computed using the ranked values y_i of a given time series (x_1, x_2, x_3, ..., x_n). The magnitudes of y_i (i = 1, 2, 3, ..., n) are compared with y_j (j = 1, 2, 3, ..., i − 1). At each comparison, the number of cases where y_i > y_j is counted and denoted n_i. The statistic t_i is thereafter defined by the following equation:

t_i = Σ_{j=1}^{i} n_j.

The mean and variance of the statistic t_i are given by

E(t_i) = i(i − 1)/4

and

Var(t_i) = i(i − 1)(2i + 5)/72.

Finally, the standardized sequential values of the statistic u(t_i) are calculated using the following equation:

u(t_i) = (t_i − E(t_i)) / sqrt(Var(t_i)).

The above equation gives the forward sequential statistic, normally called the progressive statistic. To calculate the backward/retrograde statistic values u'(t_i), the same time series (x_1, x_2, x_3, ..., x_n) is used, but the statistic values are computed starting from the end of the time series. The combination of the forward and backward sequential statistics allows for the detection of the approximate beginning of a developing trend. Additionally, in this study a 95% confidence level was considered, which means the critical limit values are ±1.96. This method has been successfully utilized in studies of trend detection in temperature [52,53] and precipitation [51,53,54].
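The sketch below implements the S statistic, its tie-corrected variance, the z-score, and the progressive SQ-MK series directly from the formulas above; it is a from-scratch illustration on synthetic data, not the R code the authors used.

```python
# Sketch: Mann-Kendall z-score and progressive SQ-MK series, computed
# directly from the formulas above (illustrative, not the authors' code).
import numpy as np

def mann_kendall_z(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Tie correction: each group of equal values of size t contributes.
    _, counts = np.unique(x, return_counts=True)
    ties = sum(t * (t - 1) * (2 * t + 5) for t in counts if t > 1)
    var = (n * (n - 1) * (2 * n + 5) - ties) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var)
    if s < 0:
        return (s + 1) / np.sqrt(var)
    return 0.0

def sqmk_progressive(x):
    """Forward sequential statistic u(t_i); the retrograde series is
    conventionally obtained by running this on the reversed series
    (and negating)."""
    x = np.asarray(x, dtype=float)
    u = np.zeros(len(x))
    t = 0
    for i in range(1, len(x)):
        t += np.sum(x[i] > x[:i])           # n_i accumulated into t_i
        e = (i + 1) * i / 4.0               # E(t_i), with i counted from 1
        v = (i + 1) * i * (2 * (i + 1) + 5) / 72.0
        u[i] = (t - e) / np.sqrt(v)
    return u

series = np.sin(np.linspace(0, 12 * np.pi, 192)) - 0.002 * np.arange(192)
print("z =", mann_kendall_z(series))
print("u(t) tail:", sqmk_progressive(series)[-3:])
```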
Wavelet Transforms and Wavelet Coherence

In this study, we opted to employ the wavelet transform analysis method [55] because of its ability to obtain a time-frequency representation of any continuous signal. Basically, the continuous wavelet transform (CWT) of a given geophysical time series is obtained by transforming the time series into time-frequency space. While there are several types of wavelets, the choice of the wavelet function is determined by the data series; for geophysical data, the Morlet wavelet function has been shown to perform well [55-57]. Thus, the CWT W_n(s) of a given time series (x_n, n = 1, 2, 3, ..., N) with respect to the wavelet Ψ_0(η) is defined as

W_n(s) = Σ_{n'=0}^{N−1} x_{n'} sqrt(δt/s) Ψ*[(n' − n)δt/s],

where s is the wavelet scale, n' is the translated time index, n is the localized time index, and Ψ* is the complex conjugate of the normalized wavelet. δt is the uniform time step (months in this case). The wavelet power is calculated as |W_n(s)|². In this study, the statistical significance of the CWT at a 95% confidence level was estimated against a red-noise model [55,57].

Using a continuous wavelet transform analysis, it is also possible to quantify the relationship between two independent time series with the same time step δt. In this study, the aim was to quantify the relationship between the NDVI averaged over the study area and selected climatic parameters. Following Grinsted et al. [57], for two time series X and Y with CWTs W_n^X(s) and W_n^Y(s), the cross-wavelet transform W_n^XY(s) is given by

W_n^XY(s) = W_n^X(s) W_n^Y*(s),

where "*" denotes the complex conjugate of the Y transform. The output of the above equation also assists in calculating the wavelet coherence. Basically, wavelet coherence is a measure of the intensity of the covariance of the two time series in the time-frequency domain. This is an important quantity because the cross-wavelet transform alone only gives a common power. Another important step is to calculate the phase difference between the two time series; here, the procedure is to estimate the mean and confidence interval of the phase difference. Following the study by Grinsted et al. [57], we used the circular mean of the phase over regions of relatively high statistical significance inside the cone of influence (COI) to quantify the phase relationship between any two independent time series. As defined in the study by Zar [58] and later by Grinsted et al. [57], the circular mean of a set of angles (a_i, i = 1, 2, 3, ..., n) is given by

a_m = arg(X, Y), with X = Σ_{i=1}^{n} cos(a_i) and Y = Σ_{i=1}^{n} sin(a_i).

Following the studies by Torrence and Compo [55] and Grinsted et al. [57], the wavelet coherence between two independent time series can be calculated using the following equation:

R_n²(s) = |S(s⁻¹ W_n^XY(s))|² / [S(s⁻¹ |W_n^X(s)|²) · S(s⁻¹ |W_n^Y(s)|²)],

where S is the smoothing operator defined by S(W_n(s)) = S_scale(S_time(W_n(s))), and S_time represents smoothing in time. For further details about the theory of wavelet analysis, the reader is referred to [55,57,59].
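As a rough illustration of the Morlet CWT defined above (not the authors' processing chain), the sketch below computes wavelet power for a synthetic monthly series by direct evaluation of the transform; the red-noise significance test and coherence smoothing are omitted for brevity.

```python
# Sketch: Morlet CWT power, implemented directly from the W_n(s)
# formula above; the monthly series is synthetic, not the NDVI data.
import numpy as np

def morlet_cwt(x, scales, dt=1.0, w0=6.0):
    """Return W[scale, n] for a Morlet mother wavelet with omega_0 = w0."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # t[n, n'] = (n' - n) * dt, matching the translated/localized indices.
    t = (np.arange(n)[None, :] - np.arange(n)[:, None]) * dt
    W = np.zeros((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        eta = t / s
        # Conjugated, normalized Morlet wavelet psi*((n' - n) dt / s).
        psi = (np.pi ** -0.25) * np.exp(-1j * w0 * eta) * np.exp(-eta ** 2 / 2)
        W[k] = (psi * np.sqrt(dt / s)) @ x
    return W

months = np.arange(192)
series = (np.sin(2 * np.pi * months / 12)
          + 0.3 * np.random.default_rng(1).normal(size=192))
scales = np.array([3, 6, 12, 24, 48], dtype=float)

power = np.abs(morlet_cwt(series, scales)) ** 2
print(power.shape)  # (num_scales, num_months)
```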
Results and Discussion

To investigate whether the El Niño event of 2014-2016 can be classified as a strong El Niño event, a time series for the period from the beginning of the satellite era (1980) to 2017 was plotted (see Figure 2a). We also considered the DMI index (Figure 2b) as a measure of the climatic conditions of the eastern part of southern Africa [43]. A general classification of ENSO events requires 5 consecutive overlapping 3-month periods with SST anomalies below −0.5 for La Niña events and above +0.5 for El Niño events. In Figure 2a,b, both the El Niño events and positive DMI values are shaded in red, whereas La Niña events and negative DMI values are indicated in blue. To identify the strength of the ENSO events, the threshold is further broken down into weak (0.5-0.9 SST anomaly), moderate (1.0-1.4 anomaly), and strong (≥1.5 anomaly) events.

The spatial distribution of the NDVI over the HiP is shown in Figure 3. In this figure, regions with greener colors indicate higher NDVI values, whereas brownish colors indicate low NDVI values. These results show that there seems to be a direct influence of ENSO on the vegetation of the HiP, especially during the strong El Niño years (2014-2016). It is evident that during El Niño years there was a decline in NDVI values, especially in the southern and western parts of the study area. This is presumably because the vegetation of the northern part of the HiP is dominated by a forest consisting of indigenous trees, which are believed to be drought resistant (see Figure 1). Additionally, a contributing factor could be that the eastern part of the HiP benefits from orographic lifting, as it is situated on high terrain (see Figure 1). The evidence of the influence of El Niño is most prominent during strong El Niño years such as 2003 and the recent intense 2014-2016 drought period, as well as during the 2008 non-ENSO drought period.

Figure 4a shows the deseasonalized monthly averaged MODIS NDVI time series for the HiP from 2002 to 2017 (red line) plotted together with the 12-month running mean smooth trend (black dotted line). The monthly mean NDVI values plotted in Figure 4a were calculated by averaging the MODIS images available in each month; for this study area, the MODIS satellites record four images per month. In general, there is a steady trend in the NDVI measured at the HiP, besides some anomalies observed in specific parts of the time series. This seems to be the case for southern Africa as a whole, as other studies have also indicated a steady trend for this region [10]. Remarkably, during the 2014-2016 period, which coincided with the recent intense El Niño, there was a sudden decrease in NDVI values, which fell to a minimum of about 0.3 in November 2015. During this period, EVI values also decreased to a minimum of about 0.11 (results not shown here). A study by Mberego and Gwenzi [60] investigated the temporal patterns of precipitation and vegetation variability over Zimbabwe during extreme dry and wet rainfall seasons using data covering the period 1981-2005. Their NDVI time series indicated a steady trend over this period; however, it seemed to be strongly affected by severe dry conditions, an observation consistent with the results presented here. In this study, the deseasonalized monthly mean NDVI time series in Figure 4a (red line) indicates a possible response corresponding to both dry and wet years, especially during the most recent strong El Niño events of 2003 and 2014-2016. In relation to the strength of the influence of El Niño in the south-western part of southern Africa, a study by Manatsa et al.
[61] analyzed agricultural drought in Zimbabwe using the standardized precipitation index (SPI). They reported the 1991-1992 period as the one which experienced the most extreme drought conditions. A reduction in NDVI values is also visible in the much smoother representation of the NDVI in Figure 4c; this reduction coincides with the most recent strong El Niño, as does a reduction coinciding with the El Niño of 2003 (see Figure 4c). Another significant feature is a strong peak, which reaches ~0.8 just after tropical storm Irina, which occurred in early March 2012 (see Figure 4). There is an expected resemblance between the NDVI and EVI observations in both the monthly mean time series and the smooth trend, with a clear indication of the effect of the 2014-2016 drought period. These observations are consistent with a study by Xulu et al. [10], who investigated the response of commercial forestry to the recent strong and broad El Niño event in a region about 70 km south-east of the HiP and reported a significant decline of NDVI values corresponding to the 2014-2016 El Niño years. Although the influence of the 2014-2016 El Niño in the HiP seems to be the strongest, it follows the same pattern as that reported by Anyamba et al. [40] in their study of the influence of both El Niño and La Niña on vegetation status over eastern and southern Africa.

Considering the level of browning of vegetation demonstrated in Figure 3 for the years 2014-2016, it is necessary to consider possible fire activity given the relatively dry conditions in the HiP. Figure 5c indicates that during the intense drought of 2014-2016 there was an increase in fire incidence in the HiP. This is revealed by a rise in the BAI values of the smooth trend to a maximum level of approximately 50 in November 2015. During the 2014-2016 period, the HiP also experienced an unprecedented decline in the total precipitation per month (see Figure 5d), while the soil temperature increased to its highest maximum (see Figure 5e). The GLDAS monthly mean ET time series shown in Figure 5f indicates a declining trend during the period 2014-2016, indicating possible vegetation stress. In order to investigate the moisture content at the root zone, the NDII index was used. The NDII (Figure 5g) indicates a pattern similar to that of the NDVI and EVI time series: the NDII had a steady trend (about 0.10) during the period 2002-2013, which was followed by a sudden decrease that reached a minimum value of −0.06 in November 2015.

The smooth trends were calculated using the Breaks For Additive Season and Trend (BFAST) method, which is described in detail by Verbesselt et al. [62,63]. Basically, the BFAST method decomposes a satellite image time series into seasonal, trend, and remainder components, and it can be applied to any other seasonal or non-seasonal time series in the geosciences. The period of the most recent intense drought (2014-2016) is indicated by the grey shaded box in each figure. In general, all the parameters show a seasonal cycle in terms of monthly means. The 12-month running mean smooth trends extracted using the BFAST method for NDVI, EVI, and BAI, plotted against anomalies of the climatic forcers Niño3.4 and DMI, are shown in Figure 6.
This plot was constructed to investigate any possible two-dimensional teleconnection between vegetation condition and the Niño3.4 and DMI climatic forcers, respectively. The panels on the left represent the vegetation indices versus Niño3.4, and the panels on the right show the vegetation indices versus DMI. Both the NDVI (Figure 6a) and EVI (Figure 6b) values show a fairly steady pattern for most parts of the time series, varying between NDVI values of 0.50 and 0.60 and between EVI values of 0.28 and 0.34. However, both the NDVI and EVI values seem to have been enhanced by the extreme rainfall brought to the eastern part of southern Africa by tropical cyclone Irina in early 2012. In that year, NDVI values increased to a maximum of approximately 0.62, whereas the more sensitive EVI index reached its maximum of approximately 0.38. The strong peaks observed during 2004 for both the NDVI and EVI time series correspond to the greening of vegetation in the HiP produced by heavy rainfall brought by tropical cyclone Elite in January 2004 [64]. NDVI values were observed to decrease sharply from late 2013 until they reached their minimum of approximately 0.40 in 2015, before beginning to recover to normal average conditions in 2017. This pattern is also depicted in the EVI time series and is directly linked to the stronger and more extensive 2014-2016 El Niño event. Similar results were presented in a study by Xulu et al. [10], who investigated the influence of recent droughts on forest plantations in Zululand.

The DMI was highly variable compared to the Niño3.4 climatic forcer throughout the study period, with several distinctive positive DMI values that reached a maximum of just below 1.0. Remarkably, there is a strong peak that extends up to approximately 0.8 during the period corresponding to 2014-2016, which coincided with the decline in NDVI and EVI and the increase in the ENSO and BAI time series. We note here that the widespread browning observed during the 2014-2016 drought period could have been accelerated by the fact that the climatic forcers known to influence the south-eastern part of southern Africa may have been in phase during this period. This, of course, needs further investigation and is discussed below.

Correlations Statistics and Mann-Kendall Test

The Pearson correlation between the NDVI and the climatic variables used in this study was derived for the whole study record. Figure 7 shows the heat map which summarizes the linear relationships between all the parameters monitored in this study. In this figure, it is clear that there is a strong correlation between the NDVI and soil temperature (r = 0.35), precipitation (r = 0.43), ET (r = 0.68), and NDII (r = 0.92). On the other hand, there is a significant strong negative correlation between the NDVI and BAI, which is not surprising because greener vegetation reduces the chances of biomass burning, while the possibility of the satellite detecting a charcoal signal from burnt vegetation during dry conditions is high. There is also a noteworthy negative correlation (r = −0.27) between the NDVI and Niño3.4. The results shown in Figure 7 also reaffirm the strong relationship between soil temperature and ET, with a strong correlation coefficient of r = 0.77. Considering that Figure 2 indicates some episodes where a strong Niño3.4 peak is in phase with DMI peaks, the noteworthy correlation of r = 0.28 between these two climatic indices seems to reaffirm this.
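A correlation matrix like the one summarized in Figure 7 can be assembled in a few lines; the columns below are synthetic placeholders for the actual monthly mean series.

```python
# Sketch: Pearson correlation matrix across study parameters.
# Columns are synthetic placeholders for the monthly mean series.
import numpy as np

rng = np.random.default_rng(2)
n = 192
ndvi = rng.normal(0.55, 0.05, n)
ndii = ndvi * 0.8 + rng.normal(0.0, 0.02, n)  # strongly coupled, as in Fig. 7
nino34 = rng.normal(0.0, 1.0, n)

data = np.vstack([ndvi, ndii, nino34])
corr = np.corrcoef(data)  # rows/cols: NDVI, NDII, Nino3.4
print(np.round(corr, 2))
```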
Figure 8 shows the inter-annual variability of the Pearson linear correlation between the HiP NDVI values and the BAI, soil temperature, precipitation, Niño3.4, DMI, ET, and NDII for the period from 2002 to 2017. The correlation between NDVI and EVI was not analyzed because the two parameters closely resemble each other. In general, the NDVI is positively correlated with soil temperature, precipitation, ET, and NDII throughout the study period. The NDVI-NDII correlation was the strongest positive correlation, with an average value of r = 0.91. This reaffirms the strong relationship between vegetation water stress and soil moisture at the root zone. The NDVI-ET correlation was observed to be steady at an average correlation coefficient of r = 0.65 during the period 2002-2013; however, this linear relationship decreased to r = 0.40 and r = 0.30 in 2015 and 2016, respectively. A study [65] that also used MODIS NDVI and GLDAS evapotranspiration data to investigate the relationship between NDVI and evapotranspiration reported a steady positive inter-annual variability of correlation coefficients with an average value of r = 0.58. As expected, the NDVI-Niño3.4 correlation is dominated by negative values, which are observed to decrease during the periods corresponding to El Niño years. This is consistent with previous studies such as those of Xulu et al. [10] and Anyamba et al. [40], who reported a significant influence of ENSO on the vegetation of southern Africa, especially the north-eastern part.
Moreover, a salient observation is that the greatest minimum correlation was recorded in 2015, a year with a particularly strong El Niño. The negative correlation between DMI and NDVI also seems to be greater during the recent intense drought period, which could indicate that Niño3.4 and DMI were in phase during this time. The correlation between the NDVI and the BAI is expected to be strongly negative, as greening is not conducive to biomass burning. However, the results presented in Figure 8 indicate that there was a sudden increase in the correlation between NDVI and BAI in 2015 before it returned to its average position in 2016 and 2017. Overall, the inter-annual variation of almost all the study parameters indicates a noticeable change during El Niño events in 2003 and, more prominently, during the 2014-2016 period.

A comprehensive summary of the MLR analysis statistics encompassing NDVI, temperature, precipitation, Niño3.4, and DMI is shown in Table 1. It should be mentioned that soil temperature, precipitation, ET, NDII, and Niño3.4 were used in this model because of their well-known possible influence on NDVI variability. The DMI climatic parameter was not used as an explanatory variable in the MLR model because of its weak correlation with the NDVI. The results in Table 1 reveal a statistically significant relationship between the NDVI and soil temperature, NDII, and ET, with p-values of 0.000386, <2.00 × 10⁻¹⁶, and 0.000173, respectively. Both precipitation and Niño3.4 indicate a statistically insignificant association with the NDVI, with p-values far greater than 0.05.
A positive significant correlation between the NDVI and soil temperature, NDII, and ET, which is also represented in Figures 7 and 8, indicates that soil moisture, soil temperature, and evapotranspiration play a significant role in vegetation health in the HiP. The significant but negative correlation between Niño3.4 and NDVI confirms the notion that ENSO variability plays a role in the climatic conditions of southern Africa [35,52].

In this study, the Mann-Kendall trend test was used for the analysis of the trend in the HiP NDVI time series. The main advantage of this technique is that it provides a non-parametric test that does not require the data to be normally distributed and is not dependent on the magnitude of the data. Furthermore, this non-parametric test method has a low sensitivity to abrupt breaks in heterogeneous time series [66]. The Mann-Kendall test model was applied to the NDVI data, and the results are shown in Figure 9. In summary, the z-score and p-value for the entire NDVI time series period (2002-2017) were found to be −1.22 and 0.224, respectively. Both the z-score and the p-value indicate a downward but not significant trend in the NDVI data. This insignificant downward trend (negative z-score) is presumably due to the unprecedented sudden reduction of the NDVI values which coincided with the 2014-2016 drought. In order to investigate the influence of drought conditions in the study area using the Mann-Kendall method, it is necessary to calculate the inter-annual variation of the Mann-Kendall z-scores. These z-scores were calculated from the monthly means for each year from 2002 to 2017.
Figure 9a shows the Mann-Kendall z-scores based on the 16 years of monthly average NDVI data for the game reserve. In general, it is expected that vegetation will respond to fluctuating climate conditions, and this is clearly depicted by significantly negative z-scores (less than Z1 = −1.96) during strong El Niño events (e.g., in 2003 and 2014/2015). The significant downward trend observed between 2014 and 2015 is the strongest such downward trend in the history of the MODIS NDVI data used in this study; it demonstrates a clear response of the vegetation of the reserve to the strong El Niño event of 2014-2016. Similar analyses and results comparable with those presented here were reported by Hou et al. [24] in their study on the inter-annual variability in growing-season NDVI and its correlation with climate variables in the south-western Karst region of China.

The sequential version of the Mann-Kendall test was applied to the NDVI monthly mean time series so as to determine the approximate time periods of the beginning of a significant trend. Figure 9b shows the sequential statistic values of the forward/progressive series u(t) (solid red line) and the retrograde series u'(t) (solid black line) obtained by the SQ-MK test for the HiP monthly mean NDVI data for the period from 2002 to 2017. There is a noticeable statistically significant downward trend which seems to coincide with the 2003 and, more strongly, the 2014-2016 El Niño events. These independent calculations are in agreement with the inter-annual variation of the Mann-Kendall z-scores presented in Figure 9a. In the case that seems to be associated with the 2014-2016 strong El Niño event, there is an apparent downward trend (indicated by the retrograde series) that begins in November 2012 and reaches the negative significant trend limit (−1.96) in April 2014. The retrograde statistic values stay in significantly negative territory during the period from April 2014 to May 2016 before starting to revert back to within the 95% confidence level limits (±1.96). This trend is regarded as significant because the progressive and retrograde curves intersect each other (turning point) within the limits of the 95% confidence level.
This significant trend turning point took place in November 2012. Another significant downward trend was observed in late 2003, with its trend turning point observed in June 2005. The intensity of the impact of the 2014-2016 drought in the HiP has been identified as identical to that of the early 1980s [11]. Some of the additional factors that reportedly intensified the impact of the 2014-2016 drought include the reduction in grazing lawns, the siltation of rivers, and the increasing number of carnivores [11]. The impact of the 2014-2016 drought affected not only this protected natural area (the HiP) but also the commercial plantations situated about 70 km southwest of the HiP [10,67]. A study by Crous et al. [67] reported a large-scale dieback of Eucalyptus grandis × E. urophylla (SClone) in the Zululand coastal plain, KwaZulu-Natal, South Africa, during the recent intense drought. This was later supported by Xulu et al. [10], who reported that the commercial forest of kwaMbonambi, northern Zululand, suffered drought stress during 2015.

Wavelet Analyses

In order to analyze the localized variation of spectral power within the time series, wavelet analysis, the most common tool for this purpose, was conducted. As mentioned earlier, the wavelet method assists by decomposing a time series into time-frequency space, which makes it possible to determine the dominant modes of variability and how they vary in time. Figure 10 shows the normalized wavelet power spectra for the monthly mean NDVI, precipitation, soil temperature, DMI, Niño3.4, NDII, and ET data. The results of the EVI wavelet analyses are not shown here because this time series is nearly identical to that of the NDVI. In Figure 10, the blue color indicates low wavelet power, and the yellow color represents areas of high wavelet power. The horizontal axis is the time scale (in years) and the vertical axis is the period (in months). The thick black line represents the 95% confidence level. The areas of wavelet power that are considered are those within the cone of influence (indicated by the solid "u"-shaped line). The cone of influence marks the areas where edge effects occur in the coherence data [55,57]. The NDVI of the HiP seems to follow the distinctive seasonal pattern of precipitation in the north-eastern part of South Africa: the region experiences rainfall during the summer period (December-February) and a dry winter period (June-August). This is confirmed by a statistically significant peak observed at around the 12-month cycle (see Figure 10a), which seems to correspond with that of precipitation (Figure 10b). The wavelet power spectra of soil temperature (Figure 10c), NDII (Figure 10f), and ET (Figure 10g) also indicate a strong peak at around the 12-month cycle. This is plausible because wet seasons (summer in this case) lead to increased soil moisture and create conditions of low evapotranspiration, and thus accelerate the greening process in the HiP. It should also be noted that the NDVI wavelet power spectra have significant peaks showing the presence of a semi-annual oscillation (6 months). On the other hand, the Niño3.4 power spectra exhibit significant power peaks in the 8-32 month band throughout the study period. It should be noted, however, that the frequency of occurrence of the peaks observed in the Niño3.4 wavelet spectra is similar to that reported in the studies of Torrence and Compo [55] and of Jevrejeva et al.
[56], who used a much longer time series of the ENSO signal.

The wavelet coherence between NDVI-Niño3.4, NDVI-DMI, NDVI-precipitation, NDVI-soil temperature, NDVI-NDII, and NDVI-ET was investigated to determine whether the significant NDVI wavelet spectra peaks observed at a given time correspond with those observed for the other parameters. Furthermore, the phase relationship between the NDVI and the other parameters was calculated and superimposed graphically in Figure 11. The phase relationship is represented by arrows, where two cross-wavelet parameters are in phase if the arrows point to the right, anti-phase if the arrows point to the left, and the NDVI leading or lagging if the arrows point upwards or downwards, respectively. The vectors were only plotted for areas where the squared coherence is greater than or equal to 0.5. More details about these calculations can be found in References [56,57] and in later studies by Schulte et al. [59].

Figure 11. The squared cross-wavelet power spectra for NDVI-Niño3.4, NDVI-DMI, NDVI-precipitation, NDVI-soil temperature, NDVI-NDII, and NDVI-ET. The continuous black lines demarcate the areas of significance at the 95% confidence level using the red noise model. The arrows are vectors indicating the phase difference between the cross-wavelet parameters (see the legend in the bottom left corner).

The local wavelet coherence spectra, together with their distinctive cross-spectra phase, for NDVI-Niño3.4, NDVI-DMI, NDVI-precipitation, NDVI-soil temperature, NDVI-NDII, and NDVI-ET are shown in Figure 11. In general, all the wavelet coherence spectra indicate that Niño3.4, DMI, precipitation, soil temperature, NDII, and ET do have some degree of coherence with the HiP NDVI at a variety of periods and timescales. However, it should be mentioned that, because a statistically significant correlation between any two variables could occur by chance, a significant commonality in a wavelet coherence spectrum does not necessarily imply interconnection. Moreover, there is a possibility of smaller areas of wavelet coherence occurring by chance, which would not indicate interconnection, whereas larger areas of significance are improbable by chance. For this reason, further investigation is required with regard to a possible teleconnection between any two time series. A study by Torrence and Compo [55] investigated the periodicities present in a much longer time series (1871-1996) of Niño3.4 using Morlet wavelets and reported the domination of periods greater than 12 months, with some episodes of shorter periods also present in their spectra.
The local wavelet coherence spectra together with their distinctive cross-spectra phase for NDVI-Niño3.4, NDVI-DMI, NDVI-precipitation, NDVI-soil temperature, NDVI-NDII, and NDVI-ET are shown in Figure 11. In general, all the wavelet coherence spectra indicate that Niño3.4, DMI, precipitation, soil temperature, NDII, and ET have some degree of coherence with the HiP NDVI over a variety of periods and timescales. However, it should be mentioned that, because a statistically significant correlation between any two variables could occur by chance, a significant commonality in a wavelet coherence spectrum does not necessarily imply interconnection. Moreover, smaller areas of wavelet coherence may occur by chance and would not indicate interconnection, whereas larger areas of significance are unlikely to arise by chance alone. For this reason, further investigation is required regarding a possible teleconnection between any two time series. A study by Torrence and Compo [55] investigated the periodicities present in a much longer time series (1871-1996) of Niño3.4 using Morlet wavelets and reported the domination of periods greater than 12 months, with some episodes of shorter periods also present in their spectra.

Figure 11. The squared cross-wavelet power spectra for NDVI-Niño3.4, NDVI-DMI, NDVI-precipitation, NDVI-soil temperature, NDVI-NDII, and NDVI-ET. The continuous black lines demarcate the areas of significance at the 95% confidence level using the red noise model. The arrows are vectors indicating the phase difference between the cross-wavelet parameters (see the legend in the bottom left corner).

In this study, the wavelet coherence between NDVI and Niño3.4 indicates smaller or no areas of high power significance, which is understandable because the 16-year monthly mean NDVI time series is dominated by periodicities of less than 16 months (Figure 10a), whereas the Niño3.4 wavelet spectra are dominated by periodicities greater than 12 months. Remarkably, there is significant power at a period band of 22-27 months from 2014 to 2017, with the cross-spectra phase pointing at the leading position for Niño3.4, which indicates that the recent strong El Niño event may have started first, with the NDVI responding months after the El Niño.
The wavelet coherence between NDVI and DMI delineates some areas of high significant power at periods of 2-16 months. It is also important to mention that there are significant peaks within the cone of influence at the period band 32-48 months during 2005-2007 and 2013-2014, respectively. The cross-wavelet phase during the years 2013-2014 indicates that the DMI was leading the NDVI. This significant peak seems to be similar to that observed in the Niño3.4-NDVI wavelet coherence spectra, which indicates that it is possible that the DMI and Niño3.4 time series were in phase during this period. If so, their joint effect could have maximized the browning observed during 2014-2016. The wavelet coherence between NDVI and precipitation, soil temperature, and ET indicates high significant power during most of the study record. In general, these spectra vectors exhibit an in-phase relationship, especially during the period band 8-18 months. This pattern is also observed at the distinctive periods of less than 8 months, especially for the period 2006-2013. The NDVI and soil temperature wavelet coherence spectra delineate distinctive high power significance with an anti-phase relationship in the 2-8 months band during 2006-2014. Apart from the two distinctive periods of 2004-2006 and 2015-2017 with high significant power, during which the NDVI time series led the temperature time series in the period band 9-14 months, the annual cycle is dominated by the in-phase relationship. Both these scenarios indicate a possible teleconnection between the two time series. The dominant in-phase relationship in the NDVI-precipitation, NDVI-soil temperature, and NDVI-ET spectra suggests that these parameters are positively correlated with the NDVI. This also indicates that the NDVI of the HiP follows the seasonal cycle of precipitation and temperature experienced in this region of southern Africa. As expected, the NDVI-NDII coherence spectra indicate significant coherence at periods greater than 3 months, with a dominant in-phase relationship, which indicates a strong correlation between NDVI and NDII. This is in agreement with the Pearson correlation coefficient results presented in Figures 7 and 8. Overall, factors such as DMI, Niño3.4, precipitation, soil temperature, NDII, and ET are shown to influence NDVI at different distinctive periods and timescales. During La Niña years, the relationship between NDVI and precipitation and temperature did not indicate any alarming patterns. However, during strong El Niño years (especially broad and strong El Niño events such as that of 2014-2016), intense droughts occur. This condition is associated with less humidity and cloud cover, which allows more solar radiation to reach the ground and accelerates evapotranspiration, which impedes photosynthetic activity.

Conclusions

Time series analysis methods were employed in this study to investigate the basic structure, variability, and trend of the HiP NDVI and its response to the variability of climatic conditions. The results of this study indicate that drought stress reaction patterns of vegetation within the HiP provide temporal responses to climate variability, suggesting a strong causal influence. Both the NDVI and EVI values, averaged over the study area, decreased suddenly during 2014-2016 to their greatest minima of approximately 0.28 and 0.11, respectively, in 2015.
The linear relationship between climatic indices and NDVI indicated that precipitation, soil temperature, soil moisture at root-zone (NDII), ET, and to some extent ENSO play a significant role in the variability of vegetation health. The Pearson correlation r and MLR p-value for precipitation and ENSO were found to be 0.45 and 2.0 × 10⁻⁷, and 0.27 and 8.4 × 10⁻⁴, respectively. While some studies [17] reported temperature as the main meteorological parameter that influences vegetation, in this study we conclude that the influence of precipitation on vegetation was more significant. Different areas of the HiP are affected differently by the strong El Niño signal because of the spatial variation of land cover. The southern part of the HiP was affected the most because it is dominated by savanna. On the other hand, the northern part of the HiP seems not to be affected, presumably because land cover in this area is dominated by forests composed of drought-resistant trees. Moreover, terrain appears to have an additional influence on the state of vegetation in the reserve. For example, the lower NDVI values corresponded with the 2014-2016 drought period, particularly in the south-western (flat) part of the reserve, whereas the northern (hilly) parts seem to have benefited from orographic precipitation, which promoted vegetation growth. Terrain is also assumed to restrict wildlife grazing in hilly parts of the reserve, where stable NDVI values are noticeable, placing more burden on flat areas that are accessible to most grazers. The Mann-Kendall trend significance test and the sequential version of the Mann-Kendall test statistic revealed a significant decreasing pattern of NDVI during the extreme drought periods of 2003 and 2014-2016, with unprecedented minimum values of NDVI detected in 2015. This study has also demonstrated how the wavelet coherence signal processing technique can serve in identifying periodicities in NDVI time series and can also help demonstrate the temporal response of vegetation status to environmental disturbances. The wavelet coherence power spectra indicate a strong influence of precipitation, soil temperature, soil moisture, and ET on the variability of NDVI. This was revealed by a dominant in-phase relationship between the climatic variables and NDVI, which suggests a positive correlation. While the El Niño of 2014-2016 was both extended and strong, it is possible that its influence in the study area was also supported by a corresponding positive DMI peak which took place at the same time as the 2014-2016 El Niño period. It is, therefore, desirable to use the wavelet coherence technique and other methods to investigate the phase relationship between ENSO and DMI for determining their corresponding influence on rainfall in the north-eastern part of South Africa. Finally, we conclude that the recent intense drought of 2014-2016 influenced the spatiotemporal pattern of the vegetation condition in the HiP. This holds implications for the tourism potential of the HiP, with attractive grazers such as white rhinos and buffalos reportedly affected by this event [11]. The results portend that freely available GEE-archived satellite data are a capable tool for monitoring droughts with a high temporal resolution across game reserves located in drought-prone areas of South Africa and other parts of the world.
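The Mann-Kendall statistic mentioned above can be computed directly from its textbook definition. The sketch below is a minimal, tie-free implementation for illustration; it is not the authors' code, and operational analyses should use a version that corrects the variance for tied values.

```python
import math

def mann_kendall_z(x):
    """Mann-Kendall trend test statistic Z for a 1D series x,
    ignoring tie corrections (illustrative version only)."""
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S without ties
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# A mostly decreasing series gives a strongly negative Z
print(mann_kendall_z([5, 4, 4.5, 3, 2.5, 2, 1]))  # about -2.7
```

A |Z| above 1.96 corresponds to significance at the 95% level, which is the criterion behind the decreasing NDVI trends reported for 2003 and 2014-2016.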
Dynamic computer simulations of electrophoresis: 2010–2020

Abstract

The transport of components in liquid media under the influence of an applied electric field can be described with the continuity equation. It represents a nonlinear conservation law that is based upon the balance laws of continuous transport processes and can be solved in time and space numerically. This procedure is referred to as dynamic computer simulation. Since its inception four decades ago, the state of dynamic computer simulation software and its use has progressed significantly. Dynamic models are the most versatile tools to explore the fundamentals of electrokinetic separations and provide insights into the behavior of buffer systems and sample components of all electrophoretic separation methods, including moving boundary electrophoresis, CZE, CGE, ITP, IEF, EKC, ACE, and CEC. This article is a continuation of previous reviews (Electrophoresis 2009, 30, S16–S26 and Electrophoresis 2010, 31, 726–754) and summarizes the progress and achievements made during the 2010 to 2020 time period in which some of the existing dynamic simulators were extended and new simulation packages were developed. This review presents the basics and extensions of the three most used one-dimensional simulators, provides a survey of new one-dimensional simulators, outlines an overview of multi-dimensional models, and mentions models that were briefly reported in the literature. A comprehensive discussion of simulation applications and achievements of the 2010 to 2020 time period is also included.

Introduction

The development of simulation models for electrophoresis has been underway for more than 40 years. Shortly after computers became available, scientists at universities in the Czech Republic (Charles University in Prague, work of B. Gaš [1]), Switzerland (University of Bern in Bern, work of P. Ryser [2]), and the United States (University of Arizona in Tucson, work of G.T. Moore [3]) began to construct dynamic computer models for electrophoresis with the goal of exploring the basics of electrokinetic separations. These early models were restricted to strong electrolytes and can be regarded as the first dynamic electrophoretic simulation models. The first dynamic models predicting the behavior of strong and weak electrolyte systems were developed in the 1980s by Bier et al. [4][5][6], Radi and Schumacher [7], Roberts [8], and Schafer-Nielsen [9]. Alternatively, computer models for the prediction of steady-state distributions in IEF [10][11][12][13] and ITP [14,15] emerged in the same time period. Over the years, many dynamic simulation models of various degrees of complexity have been described in the literature. The history of dynamic computer simulation in electrophoresis together with a survey of the models published up to 2009 was given in a previous review [16]. The fundamentals of computer simulations of electrophoretic processes, including comprehensive simulation examples and applications, were the subject of a book [17], a review paper [18], and two book chapters [19,20]. Furthermore, the use of simulations for the prediction of separations in IEF [21], instabilities in IEF [22], microfluidic ITP [23], and chiral separations [24] was discussed in the literature. Dynamic simulations were also mentioned in the context of theoretical principles of capillary electromigration methods [25].
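For orientation, a generic 1D form of this continuity equation (notation assumed here for illustration; the review defines its own symbols elsewhere) reads:

```latex
\frac{\partial c_i}{\partial t}
  = \frac{\partial}{\partial x}\left( D_i\,\frac{\partial c_i}{\partial x} \right)
  - \frac{\partial}{\partial x}\left[ \left( u_i E + v_{\mathrm{bulk}} \right) c_i \right]
  + r_i ,
```

where c_i is the concentration of component i, D_i its diffusion coefficient, u_i its signed electrophoretic mobility, E the local electric field, v_bulk an imposed or electroosmotic bulk velocity, and r_i a source term from chemical reactions such as protolysis.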
Simulation efforts were and still are driven by a desire to understand the fundamentals of all electrophoretic separation methods, including moving boundary electrophoresis, CZE, CGE, ITP, IEF, EKC, ACE, and CEC. Underlying chemical and physical processes that are involved in the separation of compounds using these techniques can thereby be identified. The transport of components in liquid media under the influence of an applied electric field can be described with the continuity equation, which represents a nonlinear conservation law.

Dynamic simulators for electrokinetic separations

Dynamic simulators are based upon algebraic acid-base and/or complexation equations and continuity equations, which are partial differential equations in time and space that can only be solved numerically using computers. Such models calculate the transport of each component through the electrophoretic space as a result of electromigration, diffusion, imposed and/or electrically driven bulk flow, solution-based chemical reactions such as protolysis and, if incorporated, also interaction of solutes with electrolyte additives, the column walls, and the column matrix. The inclusion of the diffusion current, i.e., the current carried by diffusion, has to be part of the transport equations of a dynamic electrophoretic simulator. This is illustrated with the simple cationic ITP example presented in Fig. 1. It represents a system with 10 mM potassium acetate as leader, 10 mM lithium acetate as terminator, and a sodium acetate sample zone in between. The concentrations at the column ends were kept constant and no data smoothing was used.

Figure 1. The insert in the conductivity data graph depicts the transition between K⁺ and Na⁺. The data were produced with GENTRANS without using the smoothing option. Simulation conditions: 1 cm column divided into 4000 segments (Δx = 2.5 μm), 2000 A/m² applied for 0.08 min. The mobilities of potassium, sodium, and lithium were taken as 7.91 × 10⁻⁸ m²/Vs, 5.19 × 10⁻⁸ m²/Vs, and 4.10 × 10⁻⁸ m²/Vs, respectively. The pKa and mobility of acetic acid were 4.76 and 4.12 × 10⁻⁸ m²/Vs, respectively. The cathode is to the right.

Upon application of power, the sodium zone becomes adjusted to a concentration of 8.48 mM as it gradually penetrates into the space originally occupied by the leader, where it migrates between potassium and lithium. The same applies to lithium, the terminating constituent, which becomes adjusted to a concentration of 7.59 mM. Thereafter, the entire ITP zone structure migrates at a constant velocity and without change of the zone structure toward the cathode. The data presented in Fig. 1 represent computer-predicted profiles after 0.08 min (4.8 s) of current application. These data illustrate that sharp boundary transitions with thicknesses of about 20 μm are formed that are not associated with numerical oscillations (Fig. 1A and D). If the current carried by diffusion is included in the calculations (solid black lines in Fig. 1), the pH change between the zones is predicted to be sigmoidal (Fig. 1C). Without the diffusion current in the transport equations, nonphysical spikes are predicted in the zone boundaries of the pH profiles [39,40], which is illustrated with the red dotted line profiles in Fig. 1C. In all other panels, the overlaid profiles of data obtained with (solid black lines) and without (dotted red lines) the diffusion current also reveal small differences in the boundary shapes, as is best illustrated with the conductivity (insert in Fig. 1B) and concentration profiles (Fig. 1D) depicted for the transition between potassium and sodium.
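The adjusted concentrations quoted above (8.48 mM Na⁺, 7.59 mM Li⁺) follow from the Kohlrausch regulating function; for monovalent, fully dissociated ions with a common counter-ion X, the adjusted concentration of a cation B entering the zone vacated by leader A is c_B = c_A · μ_B(μ_A + μ_X) / [μ_A(μ_B + μ_X)]. The sketch below verifies these values and then illustrates, in a deliberately minimal form, the kind of explicit finite-difference update at the core of such simulators. It handles fully dissociated ions only, omits protolysis and the diffusion-current correction discussed above, uses a periodic wrap instead of proper column-end boundary conditions, and is in no way a substitute for GENTRANS, SIMUL5/6, or SPRESSO.

```python
import numpy as np

# Absolute mobilities in m^2/(V s), taken from the Fig. 1 input data
MU = {"K": 7.91e-8, "Na": 5.19e-8, "Li": 4.10e-8, "Ac": 4.12e-8}

def kohlrausch_adjusted(c_leader, mu_leader, mu_follower, mu_counter):
    """Adjusted concentration of a monovalent cation that fills the zone
    vacated by the leader (common monovalent counter-ion assumed)."""
    return c_leader * mu_follower * (mu_leader + mu_counter) / (
        mu_leader * (mu_follower + mu_counter))

print(kohlrausch_adjusted(10.0, MU["K"], MU["Na"], MU["Ac"]))  # -> 8.48 mM
print(kohlrausch_adjusted(10.0, MU["K"], MU["Li"], MU["Ac"]))  # -> 7.59 mM

# Minimal explicit transport update for fully dissociated ions only
F, RT = 96485.0, 8.314 * 298.0

def step(c, z, mu, j, dx, dt):
    """One explicit Euler step of dc/dt = -d(flux)/dx per species.
    c: dict of 1D concentration arrays (mol/m^3); z: valences;
    mu: absolute mobilities; j: applied current density (A/m^2)."""
    kappa = F * sum(abs(z[s]) * mu[s] * c[s] for s in c)  # local conductivity
    E = j / kappa                                         # field from Ohm's law
    out = {}
    for s in c:
        v = np.sign(z[s]) * mu[s] * E                     # migration velocity
        D = mu[s] * RT / (abs(z[s]) * F)                  # Nernst-Einstein
        vf = 0.5 * (v + np.roll(v, -1))                   # velocity at right faces
        cf = np.where(vf > 0, c[s], np.roll(c[s], -1))    # upwind face value
        flux = vf * cf - D * (np.roll(c[s], -1) - c[s]) / dx
        out[s] = c[s] - dt * (flux - np.roll(flux, 1)) / dx
    return out
```

A production simulator adds protolysis equilibria, the diffusion current, realistic boundary conditions, and stability control of the time step; the sketch only shows why conductivity, field, and fluxes must be recomputed at every grid point and time step.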
This ITP system can be employed as a test to investigate whether the diffusion current is correctly included [41] and is recommended to developers of new models. The execution of a dynamic simulation requires the specification of the initial distributions of all compounds, the physico-chemical input data of the compounds in terms of mobilities and chemical equilibrium constants, the column geometry and its mesh, the input data for buffer flow or calculation of EOF, the applied constant voltage or constant current, and the time of power application together with the data storage interval. In performing a simulation, there are some points that the researcher must understand in order to maximize the utility of the simulation. Selection of the simulation parameters, such as the number of segments and the applied power level, is crucial for success. These aspects are well described in various publications [18,20,42] and are thus not discussed again in this review.

One-dimensional models

A survey of frequently used and new one-dimensional dynamic simulators is presented in Table 1. GENTRANS [4][5][6][28][29][30][31], SIMUL5 [27], and SPRESSO [32,33] are by far the three most employed electrophoretic simulators. Although they differ in a number of specifications, they were shown to provide identical results when used with the same input conditions [42]. This is illustrated with the simulation data presented in Fig. 2. To compare the output of the three simulators, a uniform grid was employed to predict the shape of an ITP boundary at a high current density, which is a demanding task. The data presented in Fig. 2 represent the anionically migrating ITP boundary between chloride and HEPES with Tris as a counter component, an example that was discussed in refs. [32,42]. The anolyte comprised 100 mM hydrochloric acid and 200 mM Tris, whereas the catholyte was composed of 50 mM HEPES and 200 mM Tris. Simulations were performed in a 2 cm column divided into 16 000 segments of equal length (Δx = 1.25 μm) and via application of a constant current density of −5000 A/m² for 6 s. GENTRANS was executed without using data smoothing and SPRESSO was run with the high-resolution sixth-order compact scheme. The initial boundary between anolyte and catholyte was at 20% of the column length. The data presented reveal that the concentration profiles of the three components across this migrating steady-state boundary of about 20 μm width are equal. GENTRANS is the oldest of the three models [4][5][6][28]. It was adapted to be executed on PCs, extended with new features [29][30][31][43][44][45][46][47][48][49][50][51], made applicable for high-resolution simulations [50,52,53,54], and extended for the simulation of micellar EKC [55,56]. Current versions run under Windows (XP, 7 [64-bit; 32-bit until 2018 only], and 10) and are available from the corresponding author upon request. GENTRANS does not feature a shell with a user-friendly possibility to input or change parameters and to graphically display the progress of an ongoing simulation. GENTRANS comprises seven separate modules, namely those for simple biprotic ampholytes, monovalent weak acids, monovalent weak bases, monovalent strong acids, monovalent strong bases, peptides (used for peptides and other multivalent components), and proteins.
This is an attractive approach because the monovalent codes run faster than the code for multivalent components, which reduces computation time for monovalent components and simple biprotic ampholytes and thereby proved to be beneficial for the simulation of IEF in the presence of a large number of carrier ampholytes [50,52,53]. Simulations with GENTRANS can be executed using several options for the column boundaries: (1) open column ends, which allow mass transport into and out of the separation space; (2) fixed concentrations at the column ends; (3) column ends that are impermeable to any buffer and sample compounds; and (4) mixed conditions with no transport or free transport of each component at the left and right boundaries. GENTRANS offers the possibility of using data smoothing (removal of negative concentrations), which can speed up a simulation and avoid numerical oscillations [42]. In GENTRANS, electrophoretic mobilities are considered to be independent of ionic strength and temperature. Furthermore, GENTRANS features algorithms that estimate the magnitude and impact of electroosmosis in CE via the use of inputs that depend on the column wall material and the ionic strength of the electrolyte [30,31,46,47,48], and it allows swapping the content of part of the column with a new electrolyte prior to continuation of a run [49,50]. Extensions of GENTRANS that were implemented during the 2010-2020 time period include Taylor-Aris dispersion [28,29,43], imposed plug flow [44,45], in situ calculated EOF [30,31,46,47,48], swapping of electrolyte in part of the column prior to continuation of a run [49,50], flux-corrected transport (flux limiter) for comparison with upwind and second-order centered schemes [40,51], a high-resolution version with the addition of multivalent components and simple input of a set of ampholytes for IEF [50,52,53,54], micellar EKC interactions [55,56], Taylor-Aris dispersion to account for dispersion due to the parabolic flow profile associated with pressure-driven flow [57], and chemical interactions between components required for the prediction of chiral separations [58]. With the Taylor-Aris feature, the effective diffusivity of analyte and system zones as functions of the capillary diameter and the amount of flow, in comparison to molecular diffusion alone, can be studied for configurations with concomitant action of imposed hydrodynamic flow and electroosmosis. Data obtained under realistic experimental conditions, for example, a 50 μm id fused-silica capillary of 90 cm total length, revealed that inclusion of flow profile-based Taylor-Aris diffusivity provides simulation patterns of analyte and system peaks that compare well with those monitored experimentally with UV and conductivity detection [57]. The GENTRANS EKC code used to simulate separations, transient trapping, and sweeping in micellar EKC [54,55] was extended by Breadmore et al. for chiral separations via consideration of complexation constants and specific mobilities of 1:1 analyte-selector complexes [58]. The model handles interactions between monovalent weak and strong acids and bases with a single monovalent weak or strong acid or base additive, including a neutral CD, under real experimental conditions. It is a tool to investigate the dynamics of chiral separations and to provide insight into the buffer systems used in chiral EKC and ITP together with features of analyte stacking and destacking. Furthermore, an option that permits the simulation of chiral CEC, in which the selector zone remains immobile even under conditions of EOF, was incorporated. Together with complex mobilities set to zero to provide an electrophoretically immobilized selector, this approach can be employed to study the CEC behavior of analytes within the stationary phase and at the transitions between the stationary phase and free solution [59].
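For fast 1:1 analyte-selector equilibria of the kind handled by these EKC extensions, the effective analyte mobility is commonly written as μ_eff = (μ_A + μ_AC·K·[C])/(1 + K·[C]), with K the complexation constant and [C] the free selector concentration. The following sketch evaluates this relation; the numerical values are invented for illustration and do not stem from ref. [58].

```python
def effective_mobility(mu_free, mu_complex, K, c_selector):
    """Effective mobility of an analyte undergoing fast 1:1 complexation.
    K in L/mol, c_selector = free selector concentration in mol/L."""
    x = K * c_selector                       # equilibrium weighting factor
    return (mu_free + mu_complex * x) / (1 + x)

# Hypothetical enantiomer pair: same free mobility, slightly different K
mu_free, mu_cplx = 20e-9, 5e-9               # m^2/(V s), illustrative
for K in (150.0, 180.0):                     # L/mol, illustrative only
    print(effective_mobility(mu_free, mu_cplx, K, 0.010))
```

The small difference in K between the two enantiomers translates into a mobility difference that the dynamic simulators then propagate through the full nonlinear transport problem, including stacking and destacking effects.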
SIMUL5 of Hruška et al. appeared in 2006 [27]. It is based on one equation that handles protolysis of all components [26], comprised a newly designed model, and replaced previous SIMUL models that were of limited scope (see ref. [16] for an overview). SIMUL5 runs under Windows and can be downloaded from https://echmet.natur.cuni.cz. It features the use of a reduced calculation space with moving borders that bracket the separation space with changing concentrations (the part of the column where considerable changes from the initial values are expected), an approach that leads to faster simulations. SIMUL5 has a comfortable Windows environment for data input, data evaluation, visual control of the ongoing simulation, and visualization of a completed simulation in a movie format. SIMUL5 offers the option of considering mobilities as a function of ionic strength and permits the implementation of a constant buffer flow with a flat flow profile to mimic EOF. It features boundary conditions for open column ends, which allow mass transport into and out of the separation space. Non-released versions of SIMUL5 include closed boundary conditions at the column ends in a column divided into an array of compartments with identical or different cross sections in which the solutions are mixed, which permitted the simulation of isoelectric trapping separations and desalting that take place in recirculating multicompartmental electrolyzers [60], as well as the possibility of simulating micellar EKC systems (Table 1). During the 2010 to 2020 time period, SIMUL5 was extended with algorithms that describe 1:1 chemical equilibria between solutes and a buffer additive with fast interactions that can be considered instantaneous in comparison to the time scale of peak movement [61,62]. This version of SIMUL5, referred to as SIMUL5complex, handles equilibria with neutral, singly charged, and multiply charged complexation reagents, is more versatile than the EKC capability of GENTRANS, and was extensively applied to describe chiral interactions, enantiomer separations, and electromigration dispersion effects caused by complexation [24,36,37,63,64,65].
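All of these simulators treat protolysis as instantaneous, reducing acid-base speciation to effective quantities. For a monovalent weak acid, for example, the effective mobility follows from the degree of dissociation, μ_eff = μ_A⁻/(1 + 10^(pKa − pH)). The sketch below evaluates this textbook relation; the acetic acid input values are those from the Fig. 1 data above, while the pH grid is an arbitrary illustration.

```python
def weak_acid_effective_mobility(mu_anion, pKa, pH):
    """Effective mobility magnitude of a monovalent weak acid HA,
    assuming instantaneous protolysis: only the A- fraction migrates."""
    alpha = 1.0 / (1.0 + 10.0 ** (pKa - pH))   # degree of dissociation
    return alpha * mu_anion

# Acetic acid input data from Fig. 1: pKa 4.76, mu(A-) = 4.12e-8 m^2/(V s)
for pH in (3.0, 4.76, 7.0):
    print(pH, weak_acid_effective_mobility(4.12e-8, 4.76, pH))
```

At pH = pKa the effective mobility is exactly half the ionic mobility; dynamic simulators evaluate such speciation in every grid cell at every time step, which is what couples the pH profile to the migration of all components.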
SPRESSO is a MATLAB-based, fast, open-source nonlinear electrophoresis solver that appeared in 2009 and is available for free at http://microfluidics.stanford.edu/spresso [32,33]. It is based on the same general component equation as SIMUL5, features an adaptive grid system to reduce the computational time interval in configurations with a few components (but not in the presence of a large number of constituents that are unevenly distributed throughout the column), and has visual control of the ongoing simulation. Thus, the use of SPRESSO with its adaptive grid can be beneficial for rapid predictions of configurations with sharp sample or buffer boundaries that are confined to a small section of the electrophoretic column. SPRESSO is written in the MATLAB programming language (MathWorks). The compiled version of SPRESSO can be run on Windows-based PCs via installation of the MATLAB Component Runtime (MCR) library. Data evaluation, however, requires MATLAB. In the first version of SPRESSO, selections for a simulation included four different numerical schemes for spatial discretization, e.g., a high-resolution sixth-order compact scheme, a model for pressure-driven flow and Taylor-Aris dispersion, an approach to estimate electroosmosis, as well as a moving frame [32,33]. The second version featured the option of considering mobilities as a function of ionic strength and permitted the input of a convergence tolerance value [66]. Thereafter, SPRESSO was extended to simulate electrokinetic processes in channels with nonuniform cross-sectional areas and was thereby converted into a quasi-1D model [67]. In this third version, the quasi-1D governing equations are solved with a dissipative finite volume method based on a symmetric limited positive (SLIP) scheme together with an adaptive grid algorithm. This approach is claimed to ensure both unconditional stability and high accuracy and allows fast simulations at high electric field strengths with a small number of grid points. Subsequently, the nonlinear electromigration part of SPRESSO was coupled to multispecies nonequilibrium kinetic reactions [68]. This 1D simulation tool is applicable to both homogeneous reactions and surface-based reactions. It provides an aid to assist the development, analysis, and optimization of electrophoresis-based biosensors and assays involving nonequilibrium chemical reactions, such as immunoassays. The space-time conservation element and solution element (CESE) model of Yu et al. is claimed to be more accurate than conventional numerical schemes, suppresses the numerical oscillations or peaks observed in the results obtained using traditional second-order finite difference schemes, and was applied to high-resolution simulations of CZE and ITP [69]. This simulator was first extended to handle columns with uniform or variable cross-sectional areas (a 1D reduced model for microchannels) and applied to high-resolution simulation of IEF [70]. The last development encompasses the combination of adaptive mesh redistribution with CESE, which was successfully applied to high-resolution ITP and IEF simulations [71]. No further developments or applications of the CESE method were found in the literature. SIMUL6, the successor of SIMUL5, was recently released as free software that can be downloaded from https://echmet.natur.cuni.cz and https://www.simul6.app. A completely new source code was written. It includes faster procedures for the numerical integration of partial differential equations and uses multithreaded computation. This made the simulator up to 15 times faster compared to SIMUL5 [72]. SIMUL6 runs on Windows-based 64-bit computers, Linux 64-bit, and macOS 64-bit, and it has a user-friendly interface to input all parameters. Simulations in progress can be followed graphically with three in situ data frames for (i) the distributions of all components, pH, conductivity, and the electric field strength along the column, (ii) the current and voltage as a function of time, and (iii) the signal of a detector placed at a selected position along the column. The detector graph encompasses signals for components, conductivity, and pH. SIMUL6 has a number of very useful features.
The separation column can be divided into a number of individual sections, and the diameter of each section can be set individually. In all segments, transversal diffusion is assumed to be infinitely fast. This represents a quasi-1D approach that was used previously in other models [60,67,70]. Having larger diameters at the column ends permits the simulation of the impact of electrode compartments. Furthermore, mobilities and pKa values of the components can be specified for different column segments, which, e.g., allows the study of mass transport across a boundary between free solution and a zone containing a neutral gel in which migration is retarded [73]. Swapping of a part of the electrolyte with another electrolyte after some time during the run is a procedure that is useful to study electrophoretic mobilization in IEF [50] or to characterize ITP condensation followed by CZE separation [49]. These optional features are of interest in various situations of chip electrophoresis. The current version of SIMUL6 (version 6.1.2) includes neither electroosmotic and/or hydrodynamic flow along the column nor complexation reactions. SIMUL6 comprises a convenient input routine for a large number of carrier ampholytes that produce the pH gradient in IEF. As an example, focusing data of 14 amphoteric analytes together with 182 carrier ampholytes and two spacing components between 300 mM H₃PO₄ (anolyte, between 0 and 9 mm of column length) and 200 mM LiOH (catholyte, between 63 and 72 mm of column length) are presented in Fig. 3. The input data of all components used are those provided as an example in SIMUL6 [72] and stemmed from ref. [74]. Focusing occurred during 500 s at 600 V in a 50 μm id column of 72 mm length that was divided into 6000 segments of equal length (12 μm mesh). Thereafter, the cathodic electrode solution was replaced with the anolyte and electrophoretic (chemical) mobilization was induced by a continuation of power application (600 V) for 700 s. These data reveal that (i) the 14 markers are separated according to their pI and (ii) electrophoretic mobilization can change the detection sequence of amphoteric analytes, i.e., the analytes do not necessarily pass the point of detection in the order of decreasing pI values when mobilization is induced by exchanging the catholyte with the anolyte.

Figure 3. The 14 analytes, 182 carrier ampholytes, and two spacing components were supplied as a homogeneous mixture between 300 mM H₃PO₄ (anolyte, between 0 and 9 mm of column length) and 200 mM LiOH (catholyte, between 63 and 72 mm of column length). A 50 μm id column of 72 mm length, divided into 6000 segments of equal length (12 μm mesh), was employed. A constant voltage of 600 V was applied for 500 s. For all carrier ampholytes, ΔpKa was 1, and the pKa step between neighboring ampholytes was 0.0406. The cationic and anionic mobilities of all ampholytes were set to 20 × 10⁻⁹ m²/Vs. The input data of the analytes used are those provided in ref. [74]. The data presented include the distributions of (A) all components, (B) pH and electric field strength, (C) the foci of the 14 analytes between IDA and ARG, and (D) the electrolytes and spacing components. Electrophoretic mobilization toward the cathode was induced by replacing the cathodic electrode solution with the anolyte followed by a continuation of power application at 600 V for 700 s. The detector plot for the 14 analytes and a detector placed at 57.6 mm is presented as an insert in panel D. The cathode is to the right. Key: 1, serotonin (pI 10.58); 2, tyramine (10.17); 3, metanephrine (9.72); 4, epinephrine (9.32); 5, norepinephrine (9.21); 6, labetalol (8.49); 7, 3-methylhistidine (7.71); 8, glycyl-histidine (7.55); 9, leucoberbelin blue I dye (5.27); 10, 4-(4-aminophenyl)butyric acid (4.86); 11, dansylated γ-aminobutyric acid (4.20); 12, dansylated glutamic acid (3.49); 13, dansylated aspartic acid (3.34); 14, dansylated iminodiacetic acid (2.99).
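A carrier ampholyte ladder such as that of Fig. 3 can be generated from the caption's parameters. The sketch below assumes, as suggested by the caption, that each biprotic ampholyte has pKa2 − pKa1 = 1 (so pI = pKa1 + 0.5) and that successive ampholytes are offset by 0.0406 pH units; the starting pKa1 is a hypothetical value chosen only so that the resulting pI range brackets the analyte pIs.

```python
def ampholyte_ladder(n=182, pka1_start=2.5, delta_pka=1.0, step=0.0406,
                     mobility=20e-9):
    """List of (pKa1, pKa2, pI, mobility) tuples for a set of biprotic
    carrier ampholytes, as used for IEF simulation input."""
    return [
        (pka1_start + i * step,                   # pKa1 of ampholyte i
         pka1_start + i * step + delta_pka,       # pKa2 = pKa1 + 1
         pka1_start + i * step + delta_pka / 2,   # pI midway between pKas
         mobility)
        for i in range(n)
    ]

ladder = ampholyte_ladder()
print(ladder[0][2], ladder[-1][2])   # pI range covered, about 3.0 to 10.4
```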
The SPYCE (Pseudo-spectral Python Code for Electrophoresis) simulator is a new model that uses the Fourier pseudo-spectral method for fast high-resolution numerical simulations of a periodic situation in which the value of any spatially varying physical quantity at the beginning of the column is equal to that at the end [75]. It can thus be employed for the study of CZE, transient ITP within a sample zone, field-amplified sample stacking, and oscillating systems, but not for ITP, moving boundary electrophoresis, and IEF. The Fourier pseudo-spectral method yields accurate and stable solutions on coarser computational grids compared with other nondissipative spatial discretization schemes. SPYCE can be obtained for free as open-source code at http://web.iitd.ac.in/~bahga/SPYCE.html. Furthermore, a dynamic mathematical simulator of CE of unreported sophistication was developed in Russia by Sursyakova et al. and applied to the study of system peak formation and physical properties of migrating analytes [76,77]. Bahga et al. described a diffusion-free ITP model for channels with axially varying cross-sections that predicts properties of homogeneous zones but not those of electrophoretic boundaries [78]. Finally, Zhang et al. developed a stump-like mathematical model for computer simulation of CZE [79].

Multi-dimensional models

A survey of frequently used and new multi-dimensional dynamic simulators is presented in Table 2.

Table 2. Multi-dimensional dynamic simulators (simulator, numerical method, selected features and applications):
• 2D model of Shim et al. (finite-volume): parallel implementation with multiple CPUs for IEF simulations [85] and an efficient parallel-scheme algorithm for 2D IEF [86]; effect of Joule heating on IEF [88]; 2D model for free-flow IEF [89]; 2D model of ITP in channels of changing cross-sectional area [91]; quasi-steady state of ITP in complex microgeometries [92]; effect of Joule heating in ITP [93]
• CFD-ACE+ with user models (finite-element): effects of electrode configuration and setting on electrokinetic analyte injection in CZE [94,95]; analyte behavior in a curved channel [96]; electrokinetic supercharging [97]; behavior of DNA in CGE after electrokinetic injection [98]; conditions at the capillary tip during field-amplified sample injection [99]; pH gradient formation and cathodic drift in a microchip [100]
• COMSOL Multiphysics with electrophoresis interface (finite-element): 1D simulations of electrophoretic separations [41]
• COMSOL Multiphysics with user-implemented models (finite-element): ITP of proteins in a networked microfluidic chip [101]; models for studying various aspects of ITP and other CE modes in microchannels [102-113]; nanochannel electrophoresis model allowing deviation from electroneutrality [115]
• PETSc-FEM (finite-element): 3D simulation model for electrokinetic flow and transport in microfluidic chips [116]; 3D model for high-performance simulation of electrophoretic techniques in microfluidic chips [117]
• OpenFOAM with user-implemented models (finite-volume): open-source toolbox for 3D electrophoretic separations [118]; model of electrokinetic transport in microfluidic paper-based systems [119,120]

The finite volume-based 2D model of Shim et al.
was developed from scratch at the Washington State University [80][81][82][83]. It represents a solver that predicts IEF separations in closed microchannels (absence of electrode solutions; for details about these boundary conditions, see also ref. [22]) and was, e.g., used to study the behavior of protein bands in a horseshoe microchannel [84]. These simulations require an enormous number of calculations and, with a large number of components, can thus last up to a couple of months of calculation time. The 2D simulator was extended for faster IEF simulations with the parallel use of multiple CPUs [85] and with an efficient algorithm of a parallel scheme [86], and was used for the validation of a steady-state protein focusing model [87]. Other developments included the integration of Joule heating to study its effect on the IEF of proteins in a microchannel [88] and a mathematical and numerical model to study 2D free-flow IEF [89]. With a proper change of the boundary conditions at the column ends, the same approach could be employed for studying the ITP process in straight channels [90] and in those featuring changes of the cross-sectional area [91]. Furthermore, the 2D model was applied to determine a quasi-steady state in complex microgeometries operated under constant voltage [92] and was extended to include temperature effects in the constant-voltage mode of ITP in a microchannel [93]. Besides the dedicated 2D simulator for electrophoresis that was developed at the Washington State University, multiphysics simulation packages, such as CFD-ACE+, COMSOL Multiphysics, and OpenFOAM, were employed for the simulation of a variety of nonlinear electrophoresis techniques (Table 2). They enable both one- and multi-dimensional high-resolution dynamic simulations of electrophoresis via the implementation of individual models into the framework of the multiphysics packages. Furthermore, in order to increase the accessibility of multi-dimensional electrophoresis simulations, COMSOL Multiphysics was extended by an electrophoretic transport interface that was first released in COMSOL Multiphysics version 5.3. The performance of the interface in COMSOL Multiphysics was investigated by comparing 1D results with those generated with GENTRANS and SIMUL5. The profiles were found to be essentially identical, which confirms that the algorithms incorporated into COMSOL Multiphysics are able to correctly describe the dynamics of electrophoretic processes [41]. The assembly and solver routines used by COMSOL Multiphysics are multithreaded. Using fast multicore PCs, the time intervals required for the simulation of the investigated examples were found to be comparable to those with GENTRANS. The COMSOL Multiphysics electrophoretic transport interface can be expected to be useful as a general model for the investigation of electrophoretic phenomena in 1D, 2D, and 3D geometries. The CFD-ACE+ software (version 2006, CFDRC, Huntsville, AL, USA) with a 2D finite-element-based electrophoretic model was previously used to characterize the effects of electrode configuration and setting on electrokinetic analyte injection in high-resolution CZE [94,95].
During the 2010 to 2020 period, this model was applied to explore the migration behavior of analytes with or without ITP stacking in a curved channel [96], to study electrokinetic supercharging with a system-induced terminator and an optimized capillary versus electrode configuration for parts-per-trillion detection of rare-earth elements in CZE [97], to investigate DNA aggregation and cleavage in CGE induced by a high electric field during electrokinetic sample injection [98], and to describe the conditions at the capillary tip during field-amplified sample injection with a mobility-boost effect [99]. Furthermore, the same simulator was employed to investigate the pH gradient formation between anolyte and catholyte reservoirs together with its cathodic drift in microchip IEF with imaged UV detection [100]. COMSOL Multiphysics is a commercial multiphysics finite-element simulator from Sweden (COMSOL AB, Stockholm, Sweden). This package was employed by various researchers working in the field of electrophoretic separations who applied their own models. At the Washington State University, COMSOL Multiphysics was employed to investigate various aspects of ITP in microchannels, including the behavior of proteins in comparison to CZE after they pass through a T-junction (2D simulations, [101]), a 10 000-fold concentration increase of proteins in a cascade microchip (3D, [102]), separation of lanthanides in the presence of hydroxyisobutyric acid and acetate as complexing agents (1D, [103]), separation of eight lanthanides on a serpentine PMMA microchip (2D, [104]), and separation and concentration of fluorescently tagged cardiac troponin I from two proteins with similar isoelectric properties in a PMMA straight-channel microfluidic chip (1D, [105]). Another effort involving COMSOL Multiphysics simulations led to the assessment of the effect of counterflow in ITP performed in an open capillary (2D, [106]) and in a monolithic column [107]. In the latter work, the fluid-flow profile in a porous medium was simulated using the Brinkman equation built into COMSOL Multiphysics. Counterflow ITP in the monolithic column showed undistorted analyte zones with significantly reduced dispersion compared to the severe dispersion observed in an open capillary. The data presented in Fig. 4 represent those obtained for free-solution anionic ITP of fluorescein with a high molecular diffusivity (4.25 × 10⁻¹⁰ m²/s) and R-phycoerythrin (RPE) with a low molecular diffusivity (0.157 × 10⁻¹⁰ m²/s) in the absence of counterflow (Fig. 4A) and in the presence of a counterflow that compensates anionic migration such that the analyte zones become immobile (Fig. 4B and C). Figure 4D depicts a schematic illustration of the electromigration of an analyte counteracted by a parabolic counterflow. The electromigration velocity of the analyte is uniform, whereas the flow induced by pressure or gravity is of parabolic shape; the composite migration velocity of the analyte is the superposition of these two velocities. The leading electrolyte is to the left, and the terminating electrolyte to the right. The applied current density is 226.4 A/m² and the average flow velocity is 1.45 × 10⁻⁴ m/s. In the absence of an applied counterflow, fluorescein and RPE migrate as narrow peaks with fluorescein ahead of RPE (Fig. 4A).
With a parabolic counterflow profile and molecular diffusivity (images (i) and solid line graphs in panels B and C), somewhat broader and distorted zones are predicted, with the flow effect being significantly larger for the case of RPE with the much lower diffusivity. With a plug counterflow profile and Taylor-Aris effective diffusivity (images (ii) and broken line graphs in panels B and C), peak distortion for both cases is predicted to be larger compared to the predictions for the parabolic counterflow. This indicates that Taylor-Aris diffusivity somewhat overestimates peak broadening [106]. (Fig. 4: the anode is to the left; from ref. [106]; copyright Wiley-VCH GmbH; reproduced with permission.) In other research groups, COMSOL Multiphysics simulations were used to investigate diffusion-dependent focusing regimes in peak-mode counterflow ITP [108], sample dispersion in ITP [109], analyte stacking in a continuous sample flow interface under conditions approaching quantitative electrokinetic injection from the entire sample volume [110], mass transport in a micro flow-through vial of a junction-at-the-tip CE-MS interface [111], pressure-assisted capillary electrophoresis frontal analysis for faster binding constant determination [112], the scaling behavior in on-chip field-amplified sample stacking [113], and electrolysis phenomena occurring at the interface between electrode and electrolyte that have an impact on the electrode environment [114]. Very recently, a 2D mathematical model of electromigration that considers the deviation from electroneutrality in the diffuse layer of the double layer when the liquid phase is composed of a solution of weak electrolytes of any valence and complexity was developed, integrated into COMSOL Multiphysics, and applied to the prediction of electromigration in nanochannels [115]. Its outcome was demonstrated by the numerical simulation of the double layer composed of a charged silica surface and an adjacent liquid solution composed of weak multivalent electrolytes. The validity of this model is not limited to the diffuse part of the double layer but holds for the electromigration of electrolytes in general. Kler et al. designed 3D high-performance simulation models to study electrokinetic flow and transport [116] and electrophoretic separation techniques [117] in microfluidic chips; these were performed with a parallel multiphysics code referred to as PETSc-FEM within a Python programming environment. These models employed parallel computing and were found to compute faster than COMSOL Multiphysics. As this kind of approach is not widely available to the scientific community, an open-source toolbox for electromigration separations using the platform OpenFOAM was developed [118]. The OpenFOAM (Open Field Operation and Manipulation) platform of OpenCFD Ltd (https://www.openfoam.com) is a free open-source project for the solution of multiphysics problems based on the finite volume method. It offers native 3D support, EOF calculation, automatic parallel and supercomputing support, and is licensed under the GNU General Public License. Recently, a complete mathematical model for electromigration in paper-based analytical devices was constructed [119] and applied under the same framework [120]. Flow dynamics in ITP, i.e., the impact of the mismatch of EOF between leading and trailing electrolytes on boundary distortion in ITP with concomitant EOF, was assessed in the research group of Hardt et al. by means of 2D numerical modeling.
A finite-element model implemented in COMSOL Multiphysics provides EOF-driven flow patterns and ion concentration profiles [121]. The same program was employed to validate an analytical approximation for the flow field in the vicinity of an ITP transition between electrolytes of different mobilities; the resulting convective ion transport inherently reduces the resolution of ITP separations [122]. Sample dispersion in ITP with Poiseuille counterflow was studied with a 2D finite-volume model and used to validate a 1D model for the area-averaged concentrations that is based on a Taylor-Aris-type effective axial diffusivity [123]. In other efforts, Monte Carlo simulations were used to describe electrodynamic dispersion of sample streams in free-flow zone electrophoresis [124] and broadening of analyte streams due to a transverse pressure gradient in free-flow IEF [125]. An OpenFOAM fluid dynamic simulation model was developed for the description of the local interaction of hydrodynamics and Joule heating in annular CEC [126]. IEF in a multielectrode OFFGEL setup was simulated in a 2D domain with COMSOL by solving the time-dependent Nernst-Planck differential equation [127]. Previous modeling of IEF in an OFFGEL multicompartment cell was implemented on the commercial finite-element software Flux-Expert (Astek Rhône-Alpes, Grenoble, France) [128]. A simple, diffusion-based mathematical model for dynamic computer simulation of free-flow zone electrophoresis was developed and implemented in the Delphi XE2 environment. The model was used to simulate operational parameters (e.g., electric field, flow rate, and pH) for the prediction of amino acid [129] and protein [130] separations. Models to study the electrokinetic transport (mainly EOF) at intersections of microfluidic chips, such as that of Yang et al. implemented into the CFD-ACE+ solver and used to study geometry and voltage parameters to avoid sample leakage [131], are not elaborated here.

Applications

The applications discussed in this chapter refer to simulations performed with the one-dimensional simulators GENTRANS, SIMUL5, and SPRESSO. Work with multi-dimensional applications is referred to in Section 2.2 together with the respective solvers.

Zone electrophoresis, isotachophoresis, moving boundaries, and analyte stacking

Dynamic simulation with SIMUL5 [132] and GENTRANS [133] was employed to predict the development of peaks during the CZE separation of analytes that were monitored with setups comprising arrays of 16 and 8 contactless conductivity detectors, respectively, along the capillary. In the second case, the GENTRANS code featuring Taylor-Aris diffusivity to account for dispersion due to the parabolic flow profile associated with pressure-driven laminar flow [57] was employed and found to predict realistic conductivity detector profiles for the migration and separation of cationic and anionic analyte and system zones in the presence of an imposed constant buffer flow and a small EOF. For configurations with discontinuous buffer systems, including ITP, experimental data obtained with the array detector revealed that the EOF is not constant. Comparison of simulation and experimental data of ITP systems provided the insight that the EOF can be estimated with an ionic strength-dependent model similar to that previously used to describe EOF in fused-silica capillaries dynamically double-coated with Polybrene and poly(vinylsulfonate) [133].
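The Taylor-Aris correction invoked in these studies replaces the molecular diffusivity D with an effective axial diffusivity; for laminar flow of mean velocity U in a circular capillary of radius a, the classical result is D_eff = D + a²U²/(48D). The sketch below evaluates this relation for the two tracers of Fig. 4; the diffusivities and mean velocity are the values quoted above, whereas the capillary radius is an assumed illustrative value, since the exact geometry of ref. [106] is not restated here.

```python
def taylor_aris_diffusivity(D, U, a):
    """Effective axial diffusivity for Poiseuille flow in a circular
    capillary: D_eff = D + a^2 U^2 / (48 D)  (Taylor-Aris)."""
    return D + (a * U) ** 2 / (48.0 * D)

U = 1.45e-4        # mean counterflow velocity, m/s (from the Fig. 4 data)
a = 25e-6          # capillary radius, m -- assumed for illustration
for name, D in (("fluorescein", 4.25e-10), ("RPE", 0.157e-10)):
    print(name, taylor_aris_diffusivity(D, U, a) / D)  # enhancement factor
```

Because the enhancement scales as 1/D², the low-diffusivity RPE suffers a vastly larger dispersion enhancement than fluorescein, consistent with the flow effects discussed for Fig. 4.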
SIMUL5 was employed to provide a comparison of peak shapes predicted by simulation with those of a new nonlinear model of electromigration for the analysis of two comigrating, fully dissociated analytes [134]. For the systems studied, an almost perfect agreement was obtained. When the velocity of the separating analyte depends on the concentration of the co-analyte, the consequence is a mutual influence and a distortion of both analyte zones. Unwanted effects of carbonate in CZE of anions at high pH were assessed with SIMUL5 and validated experimentally [135]. Carbon dioxide absorbed from air into BGEs and samples is thereby shown to form additional zones and/or boundaries that may induce strong and pronounced temporary changes in the migration of analytes. The presence of millimolar amounts of carbonate in an alkaline BGE (i) changes the pH of the BGE, which results in changes of effective mobilities, (ii) produces system peaks, dips, or more complex disturbances in the detection signal, and (iii) interacts with the sample components, which can substantially modify the course of the separation process and its result. Detailed investigation of possible effects of carbonate for a given sample and BGE by computer simulation is very useful to prevent problems associated with the analysis of specific analytes. The computer prediction of the formation and migration of system peaks was also the subject of other CZE investigations [24,57,58,76,136,137]. In one particular study, online preconcentration of weak electrolytes at the pH boundary induced by a migrating system zone was investigated using SIMUL5, which provided a detailed understanding of the process [136]. In another interesting work, system peaks were deliberately generated via the addition of Li and Cs into the BGE for use as internal standards to improve quantification in microchip electrophoresis with conductivity detection [137]. As discussed in our previous review, dynamic simulation can effectively be used to investigate the electrokinetic processes occurring during the different modes of analyte stacking [18]. This application was continued in the 2010-2020 time period. Wang et al. [138] used nicotine as the test compound to compare predictions made with SIMUL5 with real experiments for two types of dynamic pH junctions. It could be demonstrated that focusing of at least 95% of the injected target molecule was achievable in both cases. In another project, simulations of pH barrage junction focusing were employed for its optimization and application to weakly alkaline or zwitterionic analytes [139]. A simulation study performed with the same simulator dealt with the concentration of weak acids at a pH boundary formed between sodium borate pH 9.5 and sodium phosphate pH 2.5 electrolytes [140]. It revealed that the preconcentration is due to dissociation changes of the analytes and transient ITP. Furthermore, computer simulations with SIMUL5 were employed to characterize the field-enhanced sample injection process and the associated formation of moving boundaries in a process that was coupled to sweeping [141]. In the context of the CZE analysis of imatinib in plasma samples, computer simulations with SIMUL5 were used to confirm the experimental results and to understand the electrophoretic behavior of imatinib in the presence of NaCl [142].
In another laboratory, SIMUL5 simulations were employed to examine several electrolyte configurations: to investigate the role of counter-ions in the BGE for the analysis of cationic weak bases and amino acids in neutral aqueous solutions by CZE with electrokinetic injection [143]; to study the role of the counter-ions in the BGE in transient ITP-CZE [99]; to stack phenol from seawater samples using an improved dynamic pH junction for the determination of submicromolar levels of this analyte [144]; and to characterize transient ITP for the concentration of aniline and pyridine from sewage samples while having water as anolyte [145]. During the latter process, a zone comprising H⁺ as a system-induced terminating compound is produced, and the analytes become stacked between this zone and the BGE. The simulation data presented in Fig. 5 were obtained with SIMUL5 and illustrate how micromolar concentrations of phenol in the presence of 550 mM NaCl can be stacked and analyzed in a BGE comprising 160 mM boric acid and 126 mM NaOH (pH 9.8). A zone of sodium propionate injected after the seawater sample (Fig. 5C and D) provides improved sensitivity compared to the use of a conventional dynamic pH junction (Fig. 5A and B) [144]. (Other input data are given in ref. [144], from which Fig. 5 is reproduced; copyright Wiley-VCH GmbH; reproduced with permission.) SIMUL5 served as a tool to assess the removal of sample background buffering ions and myoglobin enrichment via a pH junction created by discontinuous buffers [146], and to investigate the stacking process of analytes (peptides and proteins) in a discontinuous buffer system comprising a pH 9.75 ammonium buffer as catholyte and a pH 4.25 acetate buffer as anolyte [147]. Stacking is shown to occur in a sharp, cationically migrating boundary referred to as the neutralization reaction boundary. The simulated results closely resembled the experimental data, and together they effectively revealed the characteristics of the discontinuous buffers. SIMUL5 was employed to optimize the ITP concentration conditions of nucleic acid fragments via analysis of various operating factors, including electric current, sample amount, and sample matrix [148]. Under the elucidated conditions, DNA focusing was studied in a commercial ITP instrument with preparative fraction collection. SIMUL5 was applied to investigate the effects of local conductivity differences between analyte plugs, reagent plugs, and the BGE on EMMA analyses, i.e., via analysis of the ionic boundaries that are formed upon power application [149]. Furthermore, GENTRANS, which encompasses Taylor-Aris diffusivity, was used in the diffusion mode to predict transverse diffusional reactant mixing occurring during hydrodynamic plug injection of configurations featuring four and seven plugs. The same simulator in the electrophoretic mode was applied to study electrophoretic reactant mixing caused by voltage application in the absence of buffer flow. Simulations also visualized buffer changes that occur upon power application between incubation buffer and background electrolyte and that have an influence on the reaction mixture [150]. SIMUL5 was employed to simulate analyte peak splitting of salt-containing samples that may occur in CZE with sample self-stacking [151]. In this comprehensive work, theoretical considerations based on velocity diagrams and computer simulations reveal that these effects originate in the transient phase of separation, where electromigration dispersion profiles and sharp boundaries are formed and evolve.
Multiple transient sharp boundaries (including system boundaries) may exist that are simultaneously capable of stacking an analyte, resulting in permanent or transient multiple peaks. This is illustrated with the simulation data presented in Fig. 6. A 50 mm long sample with 0.01 mM benzoic acid in 59.80 mM HCl and 59.88 mM NaOH (pH 9.80) was applied into a BGE comprising 50 mM boric acid and 32 mM NaOH (pH 9.48). Upon power application, three transient, sharp migrating anionic boundaries are formed; benzoic acid becomes stacked in two of these boundaries and is therefore split into two peaks. Eventually, two of the sharp boundaries disappear and the two benzoic acid peaks become destacked and migrate zone electrophoretically in the BGE behind the chloride zone (1138 s time point of Fig. 6; the input data of all components are given in ref. [151]; the anode is to the right; from ref. [151]; copyright Wiley-VCH GmbH; reproduced with permission). The computer-predicted peak splitting of benzoic acid in this system could be validated experimentally. The process of multiple peak formation is complex and depends on the amount of sample, the composition of the sample, and the composition of the BGE. Thus, the detected analyte pattern may vary from sample to sample and may depend on detector location. The elucidation of the peak-splitting mechanism by simulation allows both the identification of its presence for a given BGE and sample and the finding of ways to avoid it [151]. The fundamentals of electrokinetic injection of a weak acid across a short water plug into a phosphate buffer at low pH were studied by computer simulation with GENTRANS and validated experimentally [152,153]. The current during electrokinetic injection, the formation of the analyte zone, changes occurring within and around the water plug, and mass transport of all compounds in the electric field were investigated. The simulation provided insight into these changes, including the nature of the migrating boundaries and the stacking of the analyte. The data confirm the role of the water plug in preventing contamination of the sample by components of the background electrolyte and suggest that mixing caused by electrohydrodynamic instabilities increases the water plug conductivity. The sample conductivity must be controlled by the addition of an acid to create a buffering environment and to prevent the generation of a reversed flow that removes the water plug [152]. Simulations of the behavior of the electrophoretic system after electrokinetic injection of cationic compounds across a short water plug revealed that a phosphoric acid zone at the plug-buffer interface is formed and becomes converted into a migrating phosphate buffer plug that corresponds to the cationically migrating system zone of the phosphate buffer system. Its mobility is higher than that of the analytes such that they migrate behind the system zone in a phosphate buffer comparable to the applied BGE [153]. GENTRANS simulations were also involved in studies of electroosmotic flow-balanced isotachophoretic stacking with continuous electrokinetic injection for the concentration of anions in high-conductivity samples [154], pressure-assisted electrokinetic supercharging for the enrichment of non-steroidal anti-inflammatory drugs [155], acid-induced transient isotachophoretic stacking of basic drugs in co-electroosmotic flow CZE [156], and stacking of monosaccharides after hydrolysis of a single wood fiber in an open microchannel [157].
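All of the dynamic simulators discussed in this review (GENTRANS, SIMUL5, SPRESSO, and their successors) ultimately integrate coupled 1D electromigration-diffusion transport equations. The following minimal sketch is our own illustration of that shared core, not code from any of these packages: it propagates fully dissociated, monovalent ions at constant current density, with the local electric field obtained from Ohm's law. Real simulators add protolysis equilibria, ionic-strength corrections, EOF, and more robust numerics. The mobility values in the usage example are standard literature figures; the grid and time-step choices are illustrative assumptions.

```python
import numpy as np

F = 96485.0          # Faraday constant, C/mol

def simulate_1d(c, z, mu, D, j, dx, dt, steps):
    """Explicit scheme for coupled electromigration-diffusion transport.

    c  : (n_species, n_grid) concentrations, mol/m^3
    z  : signed charge numbers; mu: absolute mobilities, m^2/(V s)
    D  : diffusion coefficients, m^2/s; j: current density, A/m^2
    """
    for _ in range(steps):
        # Local conductivity and electric field from Ohm's law (no EOF)
        kappa = F * np.sum(np.abs(z)[:, None] * mu[:, None] * c, axis=0)
        E = j / kappa
        for i in range(c.shape[0]):
            v = np.sign(z[i]) * mu[i] * E          # electromigration velocity
            flux = v * c[i] - D[i] * np.gradient(c[i], dx)
            c[i] -= dt * np.gradient(flux, dx)     # continuity equation
        c[:, 0], c[:, -1] = c[:, 1], c[:, -2]      # crude no-flux boundaries
    return c

# Example: chloride leader next to acetate terminator, Na+ counter-ion
# (10 mol/m^3 = 10 mM; acetate treated as fully dissociated for simplicity)
n = 400; dx = 1e-5
c = np.zeros((3, n))
c[0, : n // 2] = 10.0    # Cl-
c[1, n // 2 :] = 10.0    # acetate
c[2, :] = 10.0           # Na+
z = np.array([-1.0, -1.0, 1.0])
mu = np.array([79.1e-9, 42.4e-9, 51.9e-9])
D = mu * 8.314 * 298.0 / F   # Nernst-Einstein estimate
c = simulate_1d(c, z, mu, D, j=200.0, dx=dx, dt=5e-4, steps=2000)
```

Even this stripped-down scheme reproduces the self-sharpening of the leader/terminator boundary that underlies the ITP and stacking phenomena discussed above.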
In the 2010-2020 time period, SPRESSO was extensively used for the simulation of ITP systems. SPRESSO was extended with the Onsager-Fuoss model for actual ionic mobility and the extended Debye-Hückel theory for the correction of ionic activity [66]. For model ITP and CZE separations performed at low and high ionic strength, this version of SPRESSO is reported to predict data that compare well with those monitored experimentally. The quasi-1D version for variable cross-section channels was applied to simulate focusing and separation of two analytes in plateau-mode ITP and peak-mode ITP prior to, during, and after traveling through a converging channel with a fivefold cross-sectional area reduction [67]. This is shown with the cationic ITP simulation data presented in Fig. 7. Pyridine and aniline are separated between a leading electrolyte composed of 18 mM NaOH and 20 mM acetic acid and an anolyte comprising 40 mM β-alanine and 20 mM acetic acid. Figure 7A depicts simulated patterns with sample amounts that are sufficient to form plateau zones. Initially, the two analytes are injected as overlapping Gaussian peaks at the interface of the leading and terminating electrolytes. Upon application of a high current density (1207.7 A/m²), the two analytes separate quickly in the large cross-section region and form zones with a plateau concentration (t = 27 s). Pyridine has a higher mobility and forms its zone in front of aniline. As the analyte zones migrate from the large to the small cross-section region (t = 46 s), their zone lengths increase in inverse proportion to the cross-sectional area. When the analyte zones have fully migrated into the narrow cross-section region (t = 53 s), their zone lengths reach new steady-state values that are five times longer than those in the large cross-section region. It is important to note that the plateau concentrations of the analytes do not depend on the channel cross section; they are given by the area-independent Kohlrausch adjustment. The thickness of the steady-state migrating boundaries, however, becomes smaller with the increase of the current density from 1207.7 to 6038.6 A/m². This is, however, not visible in the graphs at the x-axis scale used. Figure 7B depicts isotachophoretic focusing of the same two analytes in the peak mode along the same converging channel. Compared to the case of Fig. 7A, the amounts of analytes are 20-fold smaller and the applied current is 10-fold lower. Data for three locations are presented.

Figure 7. Simulation data obtained with the SPRESSO quasi-1D model that accounts for the dispersive effect of a nonuniform electric field in channels with a variable cross-sectional area. (A) Plateau-mode cationic ITP and (B) peak-mode cationic ITP separation of pyridine (S1) and aniline (S2) between sodium (LE) and β-alanine (TE) while traveling through a converging channel with fivefold cross-sectional area reduction. The leading electrolyte was composed of 18 mM NaOH and 20 mM acetic acid, and the terminating electrolyte comprised 40 mM β-alanine and 20 mM acetic acid. Simulations used a 60 mm long computational domain with 450 grid points and a constant applied current of (A) 1 μA and (B) 0.1 μA. The amounts of sampled analytes for the peak-mode case were 20-fold lower compared to those of the plateau mode. The cathode is to the right. From ref. [67]; copyright Wiley-VCH GmbH; reproduced with permission.

Due to their low initial concentrations, the two analytes focus in the peak mode in the large cross-section region (t = 241 s).
Analyte peaks largely overlap and migrate essentially within the boundary formed by the leading and the terminating ions. As the analyte zones migrate from the large to the small cross-section region, the analyte peak concentrations increase owing to the progressing separation, and the zone boundaries become sharper, which is associated with the higher local electric field strength (t = 452 s). In the small cross-section region, their zone concentrations nearly reach the plateau values (t = 517 s). During the transition from the large to the small cross-section region, the current density changes from 120.8 to 603.9 A/m², and a higher detection sensitivity is reached. The quasi-1D approach of SPRESSO is computationally efficient and highly suited to simulate systems comprising very sharp boundaries at high electric field strengths and in channels with a variable cross-sectional area [67]. The coupling of transient ITP and CZE for increasing sensitivity and resolution was comprehensively reviewed by Bahga and Santiago [158]. The methods are compared and discussed with simulation examples generated with SPRESSO. The same simulator was employed for the validation of analytical solutions derived for analyte distributions in peak-mode ITP [159], of the plateau concentrations predicted by a diffusion-free ITP model for channels with axially varying cross section [78], and of indirect detection in ITP assays on a miniaturized system with LIF detection [160]. In other efforts, SPRESSO was used to simulate focusing and separation of ionic species using bidirectional ITP [161]. In the described process, anionic sample species are focused isotachophoretically prior to the interaction with a cationic ITP shock that changes the local pH and other properties such that the analytes become defocused and begin to separate under zone electrophoretic conditions. This principle was applied to the high-resolution separation of a 1 kbp DNA ladder. Bidirectional ITP was also applied to generate a sudden decrease in the concentration of the leading electrolyte ahead of the focused anions, and thus a corresponding decrease in analyte zone concentrations and an increase in the length of the analyte zone with plateau concentration. The latter aspect comes along with an increased detection sensitivity, as the zone length is proportional to the amount of analyte. Simulations were used to describe the species transport of the involved processes in such a system with a concentration cascade of leading electrolytes [162]. Gebauer and co-workers reported a number of interesting contributions in which SIMUL5 was used to verify the behavior of electrolyte systems that are compatible with highly sensitive analysis of cations [163-165] and anions [166-169] by capillary ITP and moving boundary systems coupled to ESI-MS detection. They represent innovative contributions in which the properties of the stacking ITP boundary can be tuned by the composition of both the leading and the terminating zone, as well as by the proper selection of a suitable spacing component. These approaches permit the monitoring of sub-ppb analyte concentrations in the peak-mode stacking format in real-world samples, such as drinking and river water. Simulation data of three moving boundary systems comprising ITP stacking areas at the front and rear boundaries of a spacer are presented in Fig. 8 (the anode is to the right; from ref. [168]; copyright Wiley-VCH GmbH; reproduced with permission).
These examples demonstrate how the spacer technique in moving-boundary ITP allows playing with selectivity [168]. A mixture of three model analytes with pKa values in the medium acidic region, benzoic acid, salicylic acid, and p-hydroxybenzoic acid, together with a spacer, served as analytes. All three examples represent moving-boundary ITP systems in which one or both electrolytes also contained the other component. Formic acid and propionic acid were anionic co-ionic constituents, and NH4+ served as counter component. Data of a system with the leading zone containing a mixture of formic acid and propionic acid, the terminating zone propionic acid only, and lactic acid as a spacer are depicted in Fig. 8A. In this configuration, benzoic acid becomes stacked at the rear boundary of the spacer zone and salicylic acid at the front boundary of the spacer zone, whereas p-hydroxybenzoic acid does not form an ITP zone and migrates as a broad zone in the terminating electrolyte. Figure 8B shows the simulation result for another moving-boundary ITP system where both electrolytes comprise mixtures of formic acid and propionic acid, and acetic acid is used as a spacer. In this system, p-hydroxybenzoic acid and benzoic acid are focused in the rear and front boundaries of the spacer zone, respectively, and salicylic acid is not stacked and migrates zone electrophoretically within the leading electrolyte. In the case where the leading zone contained formic acid only, the terminating zone both anions, and lactic acid served as a spacer (Fig. 8C), the simulation predicts the stacking of salicylic acid in the front boundary and of the two other analytes in the rear boundary. These simulations could be validated experimentally by CE-MS [168]. They illustrate that proper tuning of the electrolytes and the selection of the spacer compounds are crucial for ITP stacking of a target analyte. Such systems enhance the application range of ITP-MS. A new capillary electrophoretic separation and focusing principle in which weak nonamphoteric ionogenic species are focused and transported into or through the detector was explored by Gebauer and co-workers [170-173]. In that work, SIMUL5 was used to simulate the entire process and to validate the theoretical predictions based on calculated velocity diagrams. The prerequisite condition for the application of this principle is the existence of an inverse electromigration dispersion (EMD) profile, i.e., a profile in which the pH decreases toward the anode or cathode for focusing of anionic or cationic weak analytes, respectively. The conditions under which a weak acid is focused on a profile of this type are depicted schematically in Fig. 9. It shows the distributions of pH, conductivity κ, and velocity v along the separation column at time t after the application of current.

Figure 9. Focusing of an anionic analyte within an inverse electromigration dispersion profile. Schematic representation of pH, conductivity κ, velocity v, and analyte concentration c_A along the migration coordinate for an anionic electromigration dispersion profile after time t evolving from a sharp initial boundary between the leading zone L and the terminating zone T. The anionic analyte focuses on the profile at point x_A0, where its electrophoretic velocity v_A is equal to the gradient velocity v_j. EMD refers to electromigration dispersion. The anode is to the right. From ref. [173]; copyright Wiley-VCH GmbH; reproduced with permission.
The starting configuration is assumed to be a sharp boundary between the leading solution (zone L) and the trailing solution (zone T). The compositions of both zones are chosen such that they are equal to the compositions of the front and rear edges of the EMD profile that evolves between them after the application of current. Under a constant current density, the front and rear edges of the profile move with the constant velocities v_L and v_T, respectively, where v_L > v_T. All points along the gradient have velocities between those values. In Fig. 9, this is represented by the v_j line, which illustrates the velocity profile for the depicted time point. The line marked v_A shows the dependence of the migration velocity of a weak acid A, which becomes focused at the location of the intersection of the two velocity graphs. The orientation of the pH gradient, with higher pH at the trailing edge and lower pH at the fronting edge, is the major prerequisite for the focusing of the analyte. The EMD zone with the focused analyte migrates and disperses such that all property profiles flatten with time. Unlike in ITP, no migrating steady-state distributions are produced. The peak height of the focused analyte zone decreases as it continues to migrate toward a detector. Its concentration profile at any time can be assumed to result from the balance between the electromigrational and diffusional fluxes. The simulation data presented in Fig. 10 (the anode is to the right; from ref. [171]; copyright Elsevier; reproduced with permission) depict the concentration profiles of 12 weak acids after 10 s of power application that were applied in small quantities between a leading electrolyte composed of 15.2 mM maleic acid and 21.7 mM 2,6-lutidine (pH 5.81), whereas the trailing electrolyte contained 2 mM maleic acid and 27.6 mM 2,6-lutidine (pH 7.53) [171]. The data illustrate that (i) weak acids with pKa values between 5.5 and 9 can be quickly focused in the produced EMD profile, (ii) the focusing power becomes smaller with decreasing pKa value, and (iii) migration proceeds toward the anode on the right. Furthermore, the effect of carbonate on the focusing properties was simulated in the same way and shown to have an appreciable impact for weak acids with pKa values < 6.5 [171]. This configuration was found to be robust and was successfully applied to the analysis of nanogram-per-milliliter concentrations of sulfonamides in waters using CE-MS [171]. The fundamental resolution equation and pressure-assisted performance enhancement were reported in ref. [172], and a complete theory of analyte focusing on an inverse EMD profile is given in ref. [173]. It is important to note that this separation principle is based upon a mechanism that is different from both CZE and IEF, where separation is based on differences in mobilities and pI values, respectively.

Isoelectric focusing

Although IEF simulations can be performed with many different simulators [27,41,60,71,72,74,85-88,100,117,118], GENTRANS was by far the most used solver to predict pH gradient formation, stabilization, destabilization, and migration, as well as focusing and mobilization of analytes in IEF [16-22]. The value of using dynamic simulation for the investigation of pH gradient drifts and instabilities is the subject of a recent comprehensive review with a large number of simulation examples produced by GENTRANS [22].
It discusses the electrokinetic processes that lead to pH gradient instabilities in carrier ampholyte-based IEF. In addition to electroosmosis, there are four processes of electrophoretic nature, namely (i) the stabilizing phase with the plateau phenomenon, (ii) the gradual isotachophoretic loss of carrier ampholytes at the two column ends in the presence of electrode solutions, (iii) the inequality of the mobilities of positively and negatively charged species of ampholytes (the anionic form of an ampholyte is more hydrated than the cationic form and thus possesses a slightly smaller mobility [74,174]), and (iv) the continuous penetration of carbonate from the catholyte into the focusing column. The impact of these factors on cathodic and anodic drifts was analyzed by simulation of carrier ampholyte-based focusing in closed and open columns. Focusing under realistic conditions within a 5 cm long capillary, in which three amphoteric low molecular mass dyes were focused in a pH 3-10 gradient formed by 140 carrier ampholytes, was investigated and compared to experimental results. In open columns, electroosmosis displaces the entire gradient toward the cathode or anode, whereas the electrophoretic processes act bidirectionally with a transition around pH 4 (drifts for pI > 4 and pI < 4 typically toward the cathode and anode, respectively). The data illustrate that focused zones of carrier ampholytes have an electrophoretic flux and that dynamic simulation can be effectively used to assess the magnitude of each of the electrokinetic destabilizing factors and the resulting drift for a combination of these effects. The resulting electrokinetic transport indicates that a true steady state is never attained in carrier ampholyte-based IEF, that is, in IEF without an immobilized pH gradient [22]. In other efforts, GENTRANS was used to study three additional aspects of IEF. The first one comprises various sampling strategies for capillary IEF with concomitant electroosmotic zone mobilization [175]. The separation and focusing of analytes in a pH 3-11 gradient formed by 101 biprotic carrier ampholytes was investigated with the application of the analytes (i) mixed with the carrier ampholytes (as is customarily done), (ii) as a short zone within the initial carrier ampholyte zone, (iii) sandwiched between zones of carrier ampholytes, and (iv) introduced before or after the initial carrier ampholyte zone. This is illustrated with the data presented in Figs. 11 and 12, which were obtained with the ionic-strength-dependent EOF model for a fused-silica capillary and focusing between 10 mM phosphoric acid as anolyte and 20 mM NaOH as catholyte. Analyte separation dynamics for sampling in the presence of carrier ampholytes are depicted in Fig. 11, and those for the sample being applied before, after, and between carrier ampholytes are presented in Fig. 12. It is important to note that the current density and the EOF toward the cathode are not constant in these situations [175]. With sampling as a short zone within or adjacent to the carrier ampholytes, separation and focusing of analytes proceed as a cationic, anionic, or mixed process, and separation of the analytes is predicted to be much faster than the separation of the carrier components. Thus, after the initial separation, analytes continue to separate and eventually reach their focusing locations. This is different from the double-peak approach to equilibrium that takes place when analytes and carrier ampholytes are applied as a homogeneous mixture (Fig. 11A).
Simulation data reveal that sample application between two zones of carrier ampholytes results in the formation of a pH gradient disturbance. As a consequence, the properties of this region are sample-matrix dependent, the pH gradient is flatter, and the region is likely to represent a conductance gap (hot spot). Simulation data suggest that samples placed at the anodic side or at the anodic end of the initial carrier ampholyte zone are the favorable configurations for capillary IEF with electroosmotic zone mobilization [175]. In a second project, simulations with the sample being applied between zones of carrier ampholytes or on the anodic side of the carrier ampholytes were used to characterize the behavior of sample components with pI values outside a pH 6-8 gradient formed by 101 hypothetical biprotic carrier ampholytes [176]. Application of power leads to a situation in which the pH gradient is bracketed by two isotachophoretic zone structures comprising selected sample and carrier components as isotachophoretic zones. The isotachophoretic structures migrate electrophoretically in opposite directions. When electroosmosis or an imposed flow is present, the overall pattern is transported toward the capillary end for detection of the entire zone pattern. Sample components whose pI values are outside the established pH gradient are demonstrated to form isotachophoretic zones behind the leading cation of the catholyte (components with pI values larger than the upper edge of the pH gradient) and the leading anion of the anolyte (components with pI values smaller than the lower edge of the pH gradient). Amphoteric compounds with appropriate pI values or nonamphoteric components can act as isotachophoretic spacer compounds between sample compounds or between the leader and the sample with the highest mobility [176].

Figure 11. Dynamics of analytes in IEF sampled in the presence of carrier ampholytes. Computer-predicted analyte dynamics after 0, 0.2, 0.6, 1.0, and 1.4 min of power application for (A) analytes mixed with carrier components and placed between 3% and 23% of column length, (B) a 2% sample zone at the anodic end of the carrier ampholyte zone, (C) a sample zone between 9% and 11% of column length, and (D) a 2% sample zone at the cathodic end of the carrier ampholyte zone. GENTRANS with the ionic-strength-dependent EOF model for a fused-silica capillary was used for the simulation. Phosphoric acid (10 mM) and 20 mM NaOH served as anolyte and catholyte, respectively. A 10 cm focusing space divided into 4000 segments of equal length and a constant voltage of 1000 V were employed. A total of 101 hypothetical biprotic carrier ampholytes were used to establish a pH gradient between anode and cathode. Their pI values uniformly span the range 3.0-11.0 (ΔpI = 0.08). For each ampholyte, ΔpK was 2.5, the ionic mobility was 2.5 × 10⁻⁸ m²/Vs, and the initial concentration was 0.2 mM. Physicochemical input data for the seven amphoteric sample components and the other constituents are given in [175]. The data are presented as the sum of analyte concentrations. In the graphs at the bottom of each panel, the dotted lines demarcate the initial positions of the carrier ampholyte zones (sum of carrier component concentrations divided by 10). S and CA refer to sample and carrier ampholytes, respectively. Analyte peaks are labelled with their pI values. Data are depicted with y-scale offsets of 4 mM. The cathode is to the right. From ref. [175]; copyright Wiley-VCH GmbH; reproduced with permission.

Figure 12. Dynamics of analytes in IEF sampled outside carrier ampholytes. Computer-predicted analyte dynamics after 0, 0.2, 0.6, 1.0, and 1.4 min of power application for a sample zone (A) at the anodic side of the carrier ampholyte zone, (B) between two zones of carrier ampholytes, and (C) at the cathodic side of the initial carrier zone. Other conditions, data presentation, and key are the same as for Fig. 11. The cathode is to the right. From ref. [175]; copyright Wiley-VCH GmbH; reproduced with permission.

The third project dealt with the modeling of the formation and prevention of a pure water zone in IEF with narrow-pH-range carrier ampholytes [177]. Characteristics of gradients covering two pH units that end or begin around neutrality were investigated. The data obtained revealed that a zone of pure water is formed in focusing with carrier ampholytes when the applied pH range does not cover the neutral region, i.e., ends at pH 7.00 or begins at pH 7.00. The presence of additional amphoteric components that cover the neutrality region prevents water zone formation under current flow. Furthermore, no water zone evolves when atmospheric carbon dioxide dissolved in the catholyte causes the migration of carbonic acid (in the form of carbonate and/or hydrogen carbonate ions) from the catholyte through the focusing structure [177]. Simulation with GENTRANS was also used to predict the dynamics of pH gradient formation and protein separation in simple buffers that were applied in a binary-system micropreparative IEF approach featuring a 12 μL suspended drop between two palladium electrodes. During the electrophoretic process at low voltages (1.5-5 V), fluid was allowed to evaporate until splitting into two fractions [178]. Protein and peptide preconcentration in carrier-free systems and IEF in microchannels using simple ampholytes were investigated by computer simulation with GENTRANS [179]. In the configurations studied, the sample was uniformly distributed throughout the channel, with the driving electrodes serving as column ends. Experimental results from carrier-free systems were compared to simulation results, and the effects of atmospheric carbon dioxide and impurities in the sample solution were examined via simulation. Simulation data provided insight into the dynamics of the transport of all components under the applied electric field and revealed the formation of a pure water zone in the channel center. IEF with simple, well-defined amino acids as carrier components was assessed for the concentration and fractionation of peptides, and component distributions in the channel were assessed using MALDI-MS and nano-ESI-MS [179]. Finally, important information about the focusing dynamics and the location of the foci of Alzheimer's disease-related amyloid-beta peptides in IEF in 75 nL microchannels using simple, well-defined buffers was obtained via computer simulation [180]. A stepwise pH gradient was tailored for focusing the C-terminal peptides with a pI of 5.3 in the boundary formed between cycloserine and aspartyl-histidine. Detection was performed by direct sampling of a nanoliter volume containing the focused peptides from the microchannel, followed by deposition of this volume onto a chip with micropillar MALDI targets.
In addition to purification, IEF preconcentration was found to provide at least a 10-fold increase of the MALDI-MS signal. Dynamic simulation was used for the validation of analytical approximations designed to predict steady-state protein distributions in IEF [87,181], to investigate pH gradient formation and cathodic drift in microchip IEF [100], to study the effects of electromigration and electroosmosis in IEF within a silica nanofluidic channel [182], and to gain qualitative insight into the behavior of different chemical mobilization schemes in microchannel IEF [183]. In the latter work, a non-released developer version of SIMUL5, which features options to fix concentrations at specific boundaries and to mimic the electrode reservoirs with varying capillary cross sections, was employed. Focusing and mobilization as a one-step process [184] was not further explored in the 2010-2020 time period.

Electrokinetic chromatography, chiral separations, and affinity electrophoresis

In this section, simulations of electrophoretic systems comprising chemical reactions other than protolysis that appeared in the 2010-2020 time period are discussed. They include MEKC, complexation involved in chiral separations, affinity electrophoresis, and CE-based online reaction methods. A few reports dealing with dynamic MEKC simulations can be found in the literature. Work with the MEKC version of GENTRANS [55] provided insight into the mechanism of transient trapping in MEKC [56]. Transient trapping is a mechanism of online sample concentration and separation. It involves the injection of a short length of micellar solution in front of the sample, making it similar to sweeping in partial-filling MEKC. Simulations revealed that the mechanism for concentration in transient trapping is indeed similar to sweeping, since the analytes interact with and accumulate in the micelles that penetrate the sample zone. The mechanism for separation is, however, quite unique, since the concentrated analytes are trapped for a few seconds on the sample/micelle boundary before they are released as the micelle concentration becomes reduced. This induces electromigration dispersion and the separation of the analytes down a micelle gradient. Compared to sweeping MEKC, transient trapping occurs faster (in about one tenth of the time) and within a shorter capillary length (about one quarter), which results in a two- to threefold increase in sensitivity [56]. Furthermore, SIMUL5, which does not include SDS complexation reactions, was employed to study boundary formation of buffer components, SDS (implemented as an anion), and sample matrix components in MEKC systems in which anionic micelles are focused using sample-induced transient ITP [185], sweeping under conditions with an inhomogeneous electric field and low surfactant concentration [186], and systems in which stacking induces migration time shifts of analytes [187]. The incorporation of complexation equilibria with monovalent components into GENTRANS [58] and with monovalent and polyvalent components into SIMUL5 [61,62] provided the possibility of studying the impact of 1:1 chemical equilibria between solutes and a buffer additive for fast interactions, i.e., interactions that can be considered instantaneous in comparison to the time scale of peak movement. Complexation constants and the specific mobilities of the formed analyte-selector complexes are required as additional inputs. Both 1D dynamic simulators provide equal results when used with identical inputs.
They allow the prediction of the dynamics of analyte migration and separation, the elucidation of the origin and dynamics of system peaks, and of the interference of analyte and system peak migration [24,58]. SIMUL5complex was used with uncharged and charged selectors [62], to investigate electromigration dispersion effects caused by complexation [36,37,62,63], to validate the occurrence and shape of analyte and system peaks in situations with complex-forming equilibria predicted by a generalized model of the linear theory of electromigration [24,38,188], and to study the impact of complex mobilities, complexation constants, pH, and selector concentration on the migration order and separation of drug enantiomers [63] and profens [64]. Simulations performed with SIMUL5complex revealed the existence of unexpected and previously unexplained electromigration dispersion effects that are caused by the complexation process itself. This dispersion may occur with interactions between a charged analyte and a neutral selector in the case of a low concentration of the chiral selector and a high complexation constant [37,62,63]. Three examples are given in Fig. 13. The presented data depict the migration of R-flurbiprofen in the presence of β-CD (Fig. 13A; complexation constant of 4037 L/mol), heptakis(2,6-di-O-methyl)-β-CD (Fig. 13B; 4800 L/mol), and heptakis(2,3,6-tri-O-methyl)-β-CD (Fig. 13C; 552 L/mol) in a pH 8.13 buffer composed of 50 mM Tris and 50 mM tricine. The concentration of the selectors was varied between 0 and 100 mM. Excellent agreement between simulated (lower graphs) and experimental (upper graphs) data was obtained [37]. SIMUL5complex was also employed to elucidate the impact of the complexation of buffer constituents with neutral agents on common buffer properties [189] and on analyte and system peaks [190,191], to provide insight into the sweeping process of a drug in the presence of a neutral CD [191], to study affinity capillary electrophoresis and vacancy affinity capillary electrophoresis methods that are used for the determination of complexation constants for two cases, a fully charged analyte with a neutral selector and an uncharged analyte with a charged selector [192], and to validate the applicability of a new theoretical formula derived from partial-filling affinity capillary electrophoresis for the determination of apparent stability constants of analyte-ligand complexes [193]. Systems comprising sulfated β-CD, a multiply negatively charged selector, were simulated with SIMUL5. The executed projects included the assessment of the fundamentals of field-amplified electrokinetic injection of weak bases for enantioselective CE [194], the characterization of isomer mixtures of this selector [65], the investigation of the enantioselective separation of weak bases in an online microanalysis configuration comprising this selector [195], and the exploration of inverse cationic ITP for the separation of methadone enantiomers with this chiral selector [196]. Good agreement between simulated and experimental data was obtained in all of these studies. For a weak base, simulation properly predicted the inversion of the analyte migration direction in the presence of a multiply negatively charged selector upon increasing selector concentration, as well as the selector concentration interval in which the enantiomers of a weak base migrate in opposite directions [24,65].
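The key quantity behind these complexation studies is the effective mobility of an analyte undergoing fast 1:1 binding with a selector. The following minimal sketch is our own illustration: the flurbiprofen/β-CD complexation constant is taken from the value quoted above, while the free and complex mobilities are assumed placeholder numbers, not values from refs. [37,62,63].

```python
def effective_mobility(mu_free, mu_complex, K, c_selector):
    """Effective mobility for a fast 1:1 analyte-selector equilibrium.

    mu_free, mu_complex : mobilities of free analyte and complex, m^2/(V s)
    K                   : complexation constant, L/mol
    c_selector          : free selector concentration, mol/L
    """
    bound = K * c_selector / (1.0 + K * c_selector)  # complexed fraction
    return (1.0 - bound) * mu_free + bound * mu_complex

# R-flurbiprofen with beta-CD (K = 4037 L/mol, from the text); the
# mobilities below are illustrative assumptions
for c in (0.0, 1e-3, 1e-2, 0.1):
    mu = effective_mobility(mu_free=-20e-9, mu_complex=-5e-9,
                            K=4037.0, c_selector=c)
    print(f"[CD] = {c:6.3f} M -> mu_eff = {mu:.2e} m^2/(V s)")
```

Electromigration dispersion arises because the free selector concentration, and hence the effective mobility, varies across the analyte zone; this is precisely the regime of low selector concentration and high complexation constant identified above.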
GENTRANS was applied to characterize the migration and separation of enantiomers in systems with neutral CDs as selectors and at power levels relevant to CE experiments [58]. The separation of the enantiomers of two weak bases, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine, and of codeine (an achiral compound without complexation), in the presence of heptakis(2,6-di-O-methyl)-β-CD as selector and without selector, is depicted in the lower and upper graphs, respectively, of Fig. 14. The complexation constants of the enantiomers (between 344 and 474 L/mol) and all other input data are given in ref. [58]. The simulation was run at a constant current density of 32.37 kA/m², which corresponds to a current of 63.6 μA in a 50 μm id capillary. Predicted concentration data and those adjusted for differences in absorbance at a detection wavelength of 195 nm are presented in Fig. 14A and B, respectively. The simulation data are in agreement with those monitored experimentally, as is illustrated with the data of Fig. 14C.

Figure 14. (…) and codeine (33.4 μM) in 10-fold diluted BGE without additive. The sample initially occupied 1% of the column length and was placed between 3 and 4% of the column length. Simulations were performed with GENTRANS using a 4 μm mesh, a constant current density of 32.37 kA/m², and a constant EOF of 160 μm/s, as described in ref. [58]. The upper graphs, depicted with a y-axis offset, are corresponding data obtained in the absence of the chiral selector. For the simulation without selector, the EOF was assumed to be 180 μm/s. MET, EDDP, and COD refer to methadone, 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine, and codeine, respectively. From ref. [58]; copyright Wiley-VCH GmbH; reproduced with permission.

GENTRANS was also used to assess the migration order change of ketoconazole enantiomers at low pH in the presence of increasing amounts of (2-hydroxypropyl)-β-CD [24,197], and to investigate electrophoretic aspects of enantiomer migration and separation of methadone in CEC at low pH with an immobilized neutral selector [59]. The latter work was accomplished by using zero mobilities for the formed complexes and otherwise the same input parameters as used for free-solution simulations. This mimics conditions in the absence of unspecific interactions between the analytes and the chiral stationary phase. Simulation data revealed that separations are quicker, electrophoretic displacement rates are reduced, and sensitivity is enhanced in CEC with on-column detection in comparison to free-solution conditions. In the same work, simulation was used to study the electrophoretic analyte behavior at the interface between the sample and the CEC column with the immobilized chiral selector (analyte stacking) and at the rear end, where the analytes leave the environment with complexation (analyte destacking). Furthermore, simulations provided insight into an approach to counteract analyte dilution at the column end via the use of a BGE with higher conductivity, and into the impact of the EOF on analyte migration, separation, and detection for configurations with the selector zone being displaced or remaining immobilized under buffer flow [59]. GENTRANS was employed to characterize isotachophoretic enantiomer separation and zone stability of weak bases in the presence of a neutral CD as chiral selector [58,198,199]. The systems studied comprised acidic cationic electrolyte systems with sodium and H3O+ as leading and terminating components, respectively, and acetic acid as counter component.
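In cationic ITP systems such as these, the plateau concentration of each zone is dictated by the leader through the Kohlrausch regulating function, the same area-independent adjustment invoked for the converging-channel example of Fig. 7. The sketch below is our own illustration of the classical relation for fully ionized, monovalent co-ions with a common monovalent counter-ion; the pyridinium mobility is an assumed round number, and real weak-electrolyte systems additionally require the protolysis equilibria that the simulators handle.

```python
def kohlrausch_adjusted(c_leader, mu_L, mu_A, mu_R):
    """Adjusted plateau concentration of analyte A behind leader L in ITP.

    Valid for fully ionized, monovalent co-ions with a common
    monovalent counter-ion R (classical Kohlrausch relation).
    """
    return c_leader * (mu_A * (mu_L + mu_R)) / (mu_L * (mu_A + mu_R))

# Illustrative absolute mobilities in units of 1e-9 m^2/(V s)
mu_Na, mu_pyridinium, mu_acetate = 51.9, 30.0, 42.4   # assumed values
c = kohlrausch_adjusted(c_leader=18.0, mu_L=mu_Na,
                        mu_A=mu_pyridinium, mu_R=mu_acetate)
print(f"adjusted plateau concentration = {c:.1f} mM")
# Independent of cross section: only mobilities and the leader matter.
```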
One contribution focused on the investigation of zone formation and stability in free solution with methadone enantiomers as analytes [198]. The simulation data presented in Fig. 15 depict the cationic separation of the methadone enantiomers with (2-hydroxypropyl)-β-CD complexation in a system comprising a pH 4.60 leader composed of 10.0 mM NaOH, 24.6 mM acetic acid, and 5 mM (2-hydroxypropyl)-β-CD; 10 mM acetic acid was used as anolyte. Computer-predicted methadone, acetic acid, sodium (dashed line), and (2-hydroxypropyl)-β-CD (dotted line) profiles along the 10 cm column after 0, 5.0, and 10.0 min are presented in Fig. 15A. Concentrations (lower graphs) and pH/conductivity (upper graphs) of the migrating zone structures after 2.5, 5.0, 7.5, and 10.0 min of power application are presented in Fig. 15B, C, D, and E, respectively.

Figure 15. (…) Simulations were performed in a 10 cm column divided into 20 000 segments (5 μm mesh), with the sample placed between 5 and 6% of the column length, at a constant current density of 250 A/m² and without any EOF. The cathode is to the right. S, R, M, CD, HAc, L, and T refer to S-methadone, R-methadone, mixed analyte zone, (2-hydroxypropyl)-β-CD, acetic acid, leader, and adjusted terminator, respectively. From ref. [198]; copyright Wiley-VCH GmbH; reproduced with permission.

These data depict the separation dynamics of the methadone enantiomers via the formation of a mixed zone, which becomes shorter as a function of time and vanishes at the completion of the separation. They also reveal that all zones within the migrating isotachophoretic zone structure have higher concentrations of the selector than that applied in the leader, a deviation that is caused by the migration of the charged complexes. Another study employed norpseudoephedrine stereoisomers as analytes and dealt with zone formation, enantiomer separation, and migration for cases with the neutral selector added to the leader, immobilized on the capillary wall or support, or only partially present in the separation column [199]. For the cases of a free and an immobilized selector, the study focused on the electrophoretic transport of the analytes from the sampling compartment into the separation medium with the selector, the formation of a transient mixed zone, the separation dynamics of the stereoisomers, and the zone changes occurring during the transition from the chiral environment into a selector-free leader. Dynamic computer simulation also provided a means to investigate, in a hitherto unexplored way, the dependence of the steady-state plateau zone properties on the leader pH, the ionic mobility of the weak base, the mobility of the complex, the complexation constant, and selector immobilization. In the 2010-2020 time period, other software packages were used to characterize interactive systems under CE conditions. One is SimulChir, a special, unreleased version of SIMUL that includes the full dynamics of interconverting enantiomers [200]. This software is based upon solving a complete set of continuity equations for all constituents of the separation system together with the complexation and acid-base equilibria. It was used to simulate the dynamics of the interconversion of enantiomers in chiral separation systems and to determine the rate constants of the interconversion of oxazepam enantiomers separated with carboxymethyl-β-CD as chiral selector.
Furthermore, a simplified version of SimulChir was developed for use with multiple chiral selectors, in which the electrophoretic velocity of each enantiomer is regarded as constant. This solver was applied to study the interconversion of lorazepam enantiomers in the presence of highly sulfated β-CD, which comprises multiple charged chiral selectors [201]. The two SimulChir versions were applied to assess the accuracy and sensitivity of the determination of rate constants of interconversion in achiral and chiral environments featuring a single, well-defined chiral selector and a mixture of selectors [202]. An overview of simulation models that describe interconversion is given by Trapp [203], a survey that also includes the direct calculation method based on approximation functions and a unified equation. DCXplorer is a software tool for the determination of interconversion barriers that utilizes the analytical solution of the unified equation and can be applied to dynamic chromatography and electrophoresis [204]. This software was recently used to investigate the enantiomerization barriers of the phthalimidone derivatives EM12 and lenalidomide by dynamic EKC [205]. Finally, the affinity electrophoresis model of Fang and Chen [206-208], which describes affinity interactions in CE under simplified electromigration conditions (assumption of a constant electric field strength throughout the column), was employed to study the migration of interacting drug enantiomers in CE [209] and to test a mobility-based correction method for the accurate determination of binding constants by CE frontal analysis [210]. Furthermore, a simulator that predicts the characteristics of a moving chelation boundary was developed and applied to the sweeping of metal ions, such as Cu2+, with ethylenediaminetetraacetic acid as complexing agent [211].

Concluding remarks

Dynamic simulators are able to predict the movement of ions in solution under the influence of an electric field and are as such the most versatile tools to explore the fundamentals of electrokinetic separations. The state of dynamic computer simulation software and its use have progressed significantly over the 2010-2020 decade. In the considered time period, the three most used simulators and the first 2D model were extended, and new one- and multi-dimensional models were developed (Tables 1 and 2). Efforts were geared towards the design of computation schemes that enable faster simulations (e.g., SIMUL6 and SPYCE), simpler access to multidimensional simulations, and the availability of new features, including the incorporation of complexation for the simulation of chiral separations and the deviation from electroneutrality for the simulation of electrophoresis in nanochannels. While the new solvers were mainly applied to benchmark simulations, GENTRANS, SIMUL5, and SPRESSO were applied to a large number of relevant investigations that provided insights into the behavior of analytes and buffer systems in moving boundary electrophoresis, CZE, CGE, ITP, IEF, EKC, ACE, and CEC. Simulations led to the exploration of basic phenomena in CZE (most notably the occurrence and use of system peaks, the impact of flow with a parabolic flow profile, and analyte stacking and destacking), in ITP (the behavior of analytes in plateau- and peak-mode ITP before, during, and after traveling through a converging channel, and the effect of counterflow on zone shape), and in IEF (pH gradient drifts and instabilities, sampling strategies, and the occurrence of water zones).
Other topics include the characterization and optimization of new separation and focusing systems (focusing of weak electrolytes in an inverse EMD profile and in specifically tuned moving boundary systems comprising ITP stacking areas for CE-MS analysis of trace amounts of selected analytes), the discovery of analyte dispersion based on complexation, the dynamics of chiral separations in CE, ITP, and CEC with neutral and charged selectors, and the detailed description of EMMA and online systems, including reagent mixing, product separation, and the formation of the complex system of moving boundaries that evolves upon current application. With the availability of user-friendly solvers, dynamic computer simulations can be employed by almost anyone with a basic knowledge of electrophoresis. Furthermore, based on the recent developments and achievements, it is anticipated that this area will continue to grow over the coming years, leading to new discoveries and providing important insights that will allow the optimization of electrophoretic systems on any scale. In addition, dynamic simulation can be used as an attractive tool for educational purposes. Open Access Funding provided by Universitat Bern. The authors have declared no conflict of interest.

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
A Mesoscopic Approach to the "Negative" Viscosity Effect in Ferrofluids

We present a mesoscopic approach to analyze the dynamics of a single magnetic dipole under the influence of an oscillating magnetic field, based on the formulation of a Fokker-Planck equation. The dissipated power and the viscosity of a suspension of such magnetic dipoles are calculated from the non-equilibrium thermodynamics of magnetized systems. By means of this method we have found a non-monotonous behaviour of the viscosity as a function of the frequency of the field, which has been referred to as the "negative" viscosity effect. Moreover, we have shown that the viscosity depends on the vorticity field, thus exhibiting non-Newtonian behaviour. Our analysis is complemented with numerical simulations, which reproduce the behaviour of the viscosity we have found and extend the scope of our analytical approach to higher values of the magnetic field.

I. INTRODUCTION

It is well known that the dynamics of a suspension of dipolar particles is strongly influenced by the presence of an external field. Concerning the viscosity of the suspension, its behaviour as a function of the magnetic field is monotonous in the case of a constant field, reaching a saturation limit where the magnetic moments are oriented along the field. In contrast, when the field oscillates, that behaviour is considerably modified, to the extent that the contribution of the oscillating field to the effective viscosity of the suspension may become negative for frequencies of the field larger than the local vorticity. This phenomenon, observed experimentally in [1] and reported in refs. [2] and [3], has been referred to as the "negative" viscosity effect in ferrofluids. It reveals the presence of two regimes: one essentially dissipative, in which the variation of the viscosity is positive, and another in which the energy of the oscillating field is practically transformed into kinetic energy of the particle. It is in this last regime that the variations of the viscosity become negative. Our purpose in this paper is to present an explanation of that effect based on a Fokker-Planck dynamics describing the time evolution of the probability density for the orientation of the dipolar particle. The starting point is the formulation of that equation and its perturbative solution. In this way we compute the different components of the susceptibility. The dissipated power and the viscosity follow from an analysis based on non-equilibrium thermodynamics. The general methodology we introduce accounts for the results of ref. [3], valid in the limit of small fields, and agrees with the experimental data of ref. [1]. Our theoretical results are also compared with numerical simulations we have performed, indicating that the qualitative behavior of the system is satisfactorily reproduced even for a moderate intensity of the oscillating magnetic field. The paper is organized as follows. Using linear response theory, we compute in Section II the generalized susceptibility associated with the orientation vector of the magnetic dipoles in the ferrofluid when the latter is under the influence of an oscillating magnetic field. Section III is devoted to obtaining the contributions of the oscillating field to the dissipation of energy and to the effective viscosity. In Section IV we report results on the viscosity obtained from numerical simulations, whereas in Section V we present our main conclusions.
II. RESPONSE OF A MAGNETIC DIPOLE TO AN OSCILLATING FIELD

We consider a dilute colloidal suspension of ferromagnetic dipolar spherical particles [5], with magnetic moment m = m_s R̂, where R̂ is a unit vector accounting for the orientation of the dipole. Each dipole is under the influence of a vortex flow with vorticity Ω = 2ω_0 ẑ, with ẑ being the unit vector along the z-axis, and of an oscillating field H = H e^{-iωt} x̂, with x̂ being the unit vector along the x-axis. For t ≫ τ_r, the motion of the particle is overdamped, with τ_r = I/ξ_r being the inertial time scale. Here I is the moment of inertia and ξ_r = 8πη_0 a³ is the rotational friction coefficient, with η_0 the solvent viscosity and a the radius of the particle. This time scale defines a cut-off frequency ω_r = τ_r⁻¹, such that the condition for overdamped motion is equivalent to ω ≪ ω_r. In this case, the balance of the magnetic and hydrodynamic torques acting on each particle, together with the rigid rotor evolution equation, leads to the deterministic equation of motion for the orientation vector, eq. (3). Here λ(t) ≡ (m_s H/ξ_r) e^{-iωt}, with Ω_p being the angular velocity of the particle. With allowance for Brownian motion, the stochastic dynamics corresponding to eq. (3) is given by the Fokker-Planck equation (4), involving the probability density Ψ(R̂, t), where L_0 and L_1 are the operators defined in eq. (5), with D_r = k_B T/ξ_r being the rotational diffusion coefficient and R = R̂ × ∂/∂R̂ the rotational operator. Notice that the first and second terms on the right-hand side of eq. (5)_1 correspond to the convective and the diffusive term, respectively. Moreover, eq. (4), which, according to eq. (3), rules the Brownian dynamics in the case of overdamped motion, is valid in the diffusion regime. This regime is also characterized by the condition t ≫ τ_r, or equivalently ω ≪ ω_r, which implicitly involves the white noise assumption. To solve the Fokker-Planck equation (4) we will assume that λ_0 ≡ |λ(t)| constitutes a small parameter, such that this equation can be solved perturbatively. Thus, up to first order in λ, the solution of the Fokker-Planck equation (4) is given by eq. (6), where Ψ_0 is the zero-order solution at time t′ and R̂_0 is an arbitrary initial orientation. As follows from eq. (5)_1, the unperturbed operator L_0 is composed of the operators R_z and R², which are proportional to the orbital angular momentum operators of quantum mechanics, L_z and L², respectively; therefore, their eigenfunctions are the spherical harmonics [6]. Given that we know how R acts on the spherical harmonics, it is convenient to expand the initial condition in a series of these functions, since the spherical harmonics constitute a complete set of functions forming a basis of the Hilbert space of integrable functions over the unit sphere [7]. Using this expansion in eq. (6), we obtain the first-order correction to the probability density, ΔΨ ≡ Ψ − Ψ_0. Notice that the integral of ΔΨ(R̂, t) over the entire solid angle is zero, in agreement with the fact that the unperturbed distribution is normalized. Since we are interested in the asymptotic behavior, we will set t_0 → −∞. In this limit, eq. (6) becomes eq. (12), where now Ψ_0(R̂, t) = 1/(4π) (13) is the uniform distribution function on the unit sphere. From eq. (12) the contribution of the AC field to the mean value of the orientation vector R̂ can be obtained, as given in eq. (14). This equation can be written in the more compact form of eq. (15), where the response function [8] has been defined as in eq. (16) for τ > 0. By causality, we can take t → ∞ in the upper limit of the integral in eq. (15);
hence, this equation becomes eq. (17), where χ_i(ω) is the generalized susceptibility, i.e., the Fourier transform of χ_i(τ), eq. (18). From this equation we obtain the components of the susceptibility, eqs. (19)-(21). The quantities χ_x and χ_y have poles at ω = ±ω_0 ± 2D_r i. The inverse of the imaginary part of these poles, (2D_r)⁻¹, defines the Brownian relaxation time.

III. NON-EQUILIBRIUM THERMODYNAMICS OF THE RELAXATION PROCESS

Our purpose in this section is to compute the energy dissipated during the relaxation process of the magnetization and the rotational viscosity of the suspension. The starting point is the entropy production corresponding to the relaxation of the magnetization, as given in ref. [9], eq. (22). Here M is the magnetization of the suspension and H the magnetic field. Moreover, H_eq is the magnetic field related to the instantaneous value of M. The primes indicate that the corresponding quantities have been computed in the frame of reference rotating with the fluid. The entropy production can alternatively be written in terms of the corresponding quantities in the laboratory frame. Using the relation between the temporal derivatives of the magnetization in the two frames, from eq. (22) one obtains eq. (24), where now the different quantities refer to the laboratory system. Notice that in our case M = c m_s R̂, where c is the concentration of particles. Moreover, eq. (24) written in this way ensures the material frame invariance of the entropy production. The linear law inferred from this expression coincides with the relaxation equation postulated by Shliomis [3], provided we identify the phenomenological coefficient with the inverse of the Brownian relaxation time. The form of eq. (24) suggests a decomposition into two contributions. The first accounts for the entropy production coming from Debye relaxation, whereas the second is related to the viscous dissipation. To obtain these expressions, use has been made of the relation M = κ H_eq, with κ being the static susceptibility, and of the fact that |M| remains constant, which implies (dM/dt) · H_eq = 0 and (Ω × M) · H_eq = 0. The power dissipated in a period of the field follows from the entropy production we have computed, with one contribution for each term. The Debye contribution is obtained by using eqs. (17) and (19), with φ = (4π/3)a³c being the volume fraction of particles. Similarly, by using eqs. (17) and (20), the viscous part yields eq. (30). This last contribution introduces the rotational viscosity, defined through the relation (31). Combining eqs. (30) and (31), one infers the value of the rotational viscosity, eq. (32), which represents the contribution of the rotational degrees of freedom of the dipoles to the effective viscosity of the suspension, where η is given by the Einstein law η = η_0(1 + (5/2)φ) [10]. In view of the former results, the total dissipation is P(ω) = P_D(ω) + P_V(ω). Notice that although the contributions P_D(ω) and P_V(ω) may achieve negative values, the total dissipation always remains positive, in accordance with the second law. The linear law derived from the entropy production (24) is valid for situations close to equilibrium, when the distribution function is not too different from the equilibrium distribution. For larger deviations, this approach, and equivalently the relaxation equation of Shliomis, is no longer valid. One should then employ the Fokker-Planck description we have introduced in Section II.
IV. NUMERICAL SIMULATIONS

In order to check the validity of our results and to explore the behaviour of the system for higher values of the oscillating field, we have performed numerical simulations using a standard second-order Runge-Kutta method for stochastic differential equations [13,14]. To this purpose we have considered the Langevin equation corresponding to eq. (4), eq. (35), where ξ(t) is Gaussian white noise with zero mean and correlation ⟨ξ_i(t)ξ_j(t + τ)⟩ = 2D_r δ_ij δ(τ). From the previous equation one can easily compute the mean angular velocity of the particles by averaging over several realizations of the noise. This quantity then gives the rotational viscosity through eq. (36). To obtain this expression, we first rewrite the entropy production in terms of P_a, the axial vector related to the antisymmetric part of the pressure tensor, from which one derives the corresponding phenomenological law. The balance of torque densities, which is achieved in the asymptotic regime [10], together with eqs. (38) and (39), then leads to expression (36).

In Figs. (1) and (2) we show the results obtained for the normalized rotational viscosity, η_r/(η_0 φ λ_0²), as a function of the frequency of the applied field and as a function of the angular velocity of the fluid, respectively, for different intensities of the oscillating field. These figures clearly illustrate that for low intensities of the oscillating field the results obtained through numerical simulations are accurately reproduced by linear response theory, and that for higher values linear response theory is still qualitatively correct. The crossover point from positive to negative values of the rotational viscosity seems not to depend significantly on the amplitude of the oscillating field. It is interesting to realize that at high frequencies all the simulation curves match the linear response theory curve; thus, in this frequency regime linear response theory provides an accurate description of the phenomenon. Fig. (2) also makes the non-Newtonian character of the fluid manifest. Notice that these effects are more pronounced for angular velocities of the fluid near the angular frequency of the applied field: for high and low values of the vorticity the rotational viscosity goes to zero and to a constant negative value, respectively, whereas for intermediate values it depends on the particular value of the vorticity. From both figures, one concludes that the vorticity plays just the opposite role to the frequency of the field.
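The body of the Langevin equation (35) did not survive extraction; the sketch below is therefore a hedged reconstruction of the simulation scheme just described: a rigid rotor dR̂/dt = ω × R̂ driven by solid-body rotation with the fluid, the torque of the AC field, and white rotational noise, integrated with a stochastic Heun (second-order Runge-Kutta) predictor-corrector, which samples the physically appropriate Stratonovich interpretation of the multiplicative noise. The drift term and all parameter values are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

D_r = 1.0      # rotational diffusion coefficient
omega0 = 1.0   # half the flow vorticity (fluid rotates about z)
lam0 = 0.1     # field torque amplitude m_s H / xi_r (small: linear regime)
omega_f = 2.0  # driving frequency of the AC field
dt = 1e-3
n_steps = 100_000

def drift(R, t):
    """Assumed deterministic part of dR/dt = omega x R: rotation with the
    fluid plus the torque of the real oscillating field H cos(omega_f t) x."""
    w = omega0 * np.array([0.0, 0.0, 1.0]) \
        + lam0 * np.cos(omega_f * t) * np.cross(R, np.array([1.0, 0.0, 0.0]))
    return np.cross(w, R)

R = np.array([0.0, 0.0, 1.0])
mean_wz = 0.0
for i in range(n_steps):
    t = i * dt
    # rotational white noise: dR_noise = dW x R, <xi_i xi_j> = 2 D_r delta_ij delta
    dW = np.sqrt(2.0 * D_r * dt) * rng.standard_normal(3)
    # stochastic Heun predictor-corrector step
    R_pred = R + drift(R, t) * dt + np.cross(dW, R)
    R_pred /= np.linalg.norm(R_pred)
    R_new = R + 0.5 * (drift(R, t) + drift(R_pred, t + dt)) * dt \
              + 0.5 * (np.cross(dW, R) + np.cross(dW, R_pred))
    R_new /= np.linalg.norm(R_new)  # enforce the rigid-rotor constraint |R| = 1
    # noisy estimator of the z-component of the particle angular velocity,
    # from R x dR/dt; average over many steps (and ideally many realizations)
    mean_wz += np.cross(R, (R_new - R) / dt)[2] / n_steps
    R = R_new

print("time-averaged <Omega_p,z> ~", mean_wz)
```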
V. CONCLUSIONS

In this paper we have presented a mesoscopic approach to explain the "negative" viscosity effect occurring in suspensions of magnetic particles. The dynamics of the magnetic moment in the oscillating field is described by means of a Fokker-Planck equation which can be solved perturbatively. This equation gives rise to a hierarchy of equations [12] for the different moments, describing the relaxation of the magnetic moment. The first equation of the hierarchy, when linearized in the field, agrees with the phenomenological equation obtained from non-equilibrium thermodynamics and with the corresponding one postulated by Shliomis. Following this procedure, we have been able to compute the dissipated power and the viscosity, which is a non-monotonic function of the frequency of the field. Up to first order in the field our results agree with the corresponding ones of ref. [3], based on the phenomenological equation proposed by Shliomis.

The phenomenological approach, dealing with the phenomenological law derived from the entropy production, is strictly valid at small fields and is no longer correct when larger deviations from the equilibrium distribution occur. If we are interested in the response of the system to larger values of the field, the correct approach is the one based upon the Fokker-Planck equation we have proposed in section II. This approach involves the evolution equations for the higher-order moments of the probability distribution in the hierarchy. After introducing a decoupling approximation, the solution for the first moment can be used to compute the energy dissipation, which allows one to define an effective viscosity.

In order to go beyond the linear regime we have performed numerical simulations. This numerical analysis enables us to assess the validity of the linear response theory treatment. Our conclusion is that, qualitatively, linear response theory may provide a reasonable explanation of the phenomenon.
Dental and Maxillofacial Cone Beam CT—High Number of Incidental Findings and Their Impact on Follow-Up and Therapy Management

Cone beam computed tomography (CBCT) is increasingly used for dental and maxillofacial imaging. The occurrence of incidental findings has been reported, but the clinical implications of these findings remain unclear. The study's aim was to identify the frequency and clinical impact of incidental findings in CBCT. A total of 374 consecutive CBCT examinations from a 3 year period were retrospectively evaluated for the presence, kind, and clinical relevance of incidental findings. In a subgroup of 54 patients, the therapeutic consequences of CBCT incidental findings were queried from the referring physicians. A total of 974 incidental findings were detected, involving 78.6% of all CBCT, hence 2.6 incidental findings per CBCT. Of these, 38.6% were classified as requiring treatment, with an additional 25.2% requiring follow-up. Incidental findings included dental pathologies in 55.3%, pathologies of the paranasal sinuses and airways in 29.2%, osseous pathologies in 14.9% of all CBCT, and findings in the soft tissue or TMJ in few cases. Clinically relevant dental incidental findings were detected significantly more frequently in CBCT for implant planning compared to other indications (60.7% vs. 43.2%, p < 0.01), and in CBCT with an FOV ≥ 100 mm compared to an FOV < 100 mm (54.7% vs. 40.0%, p < 0.01). Similar results were obtained for paranasal incidental findings. In a subgroup analysis, 29 of 54 patients showed incidental findings which were previously unknown, and the findings changed therapeutic management in 19 patients (35%). The results of our study highlight the importance of a meticulous analysis of the entire FOV of CBCT for incidental findings, which showed clinical relevance in more than one in three patients. Due to the high number of clinically relevant incidental findings, especially in CBCT for implant planning, an FOV of 100 × 100 mm² covering both the mandible and the maxilla was concluded to be recommendable for this indication.

Introduction

Cone beam computed tomography (CBCT) was introduced for dental imaging by Piero Mozzo et al. in 1998 [1]. CBCT rapidly found its way into clinical practice due to its superior spatial resolution for high-contrast structures such as bones and teeth [2]. Until recently, former standard techniques such as the orthopantomogram (OPG) have been increasingly replaced by CBCT [3]. Besides its application in endodontics, periodontics, and orthodontics [4][5][6], CBCT has also demonstrated its importance in implant planning [7]. For this indication, a large field of view (FOV) is often necessary; in many cases, the entire mandible or the entire maxilla has to be depicted. Dief et al. conducted a systematic review of the literature regarding incidental findings in CBCT scans [8]. In seven out of ten reviewed articles, CBCT examinations were performed with a "large" FOV [9][10][11][12][13][14][15]. In three of these studies, the FOV was 130 × 130 mm² or larger [9,12,14]. This imposes requirements on the imaging and the diagnostic process with regard to: 1. the radiation exposure of the patient, 2. the evaluation of a possibly large and complex three-dimensional examination area, and 3. the interpretation of pathologies beyond the professional focus of the clinical specialist, especially for incidental findings (IF) and their implications for further therapy. Lopes et al.
[16] detected an average of 4.72 incidental findings in a total of 47 examinations that covered both the mandible and the maxilla. Nguyen et al. found an average of 1.85 IF per scan in 555 patients of an older population undergoing pre-implant assessment [17]. In this context, the aim of this study was the systematic analysis of a large consecutive patient group with regard to the frequency and especially the clinical impact of incidental findings, depending on the indication for CBCT imaging and the size of the FOV.

Materials and Methods

Approval for this study was obtained from the Institutional Review Board, and informed consent was obtained from each patient. First, 374 consecutive CBCT studies that were performed over a period of 33 months in a tertiary hospital were retrospectively reviewed and evaluated for technical parameters and pathological incidental findings. Follow-up imaging was excluded to avoid distortion of the results by therapies that had taken place in the meantime. In the case of multiple examinations of one patient, the oldest examination of this patient was analyzed. In a second step, 54 of the 374 patients included in the study were reviewed to assess the clinical relevance of the registered incidental findings.

Technical Examination Parameters and Standard Operating Procedure

All studies were acquired on a Morita 3D Accuitomo 170 CBCT (J. Morita Corp., Kyoto, Japan). This CBCT device consists of a solid frame construction with a 360° rotatable flat-panel detector. The patient was positioned on a customizable examination chair with fixations for the head and the chin to reduce motion artifacts. After performing two scout images, the FOV was selected out of nine available settings, ranging from 40 × 40 mm² to 120 × 170 mm². Voxel sizes ranging from 80 to 260 µm could be selected, depending on the chosen FOV. The indication for the CBCT examination was checked for every patient by a radiologist with expertise. In the vast majority of all cases, an FOV of 100 × 100 mm² (275 patients, 73%) was selected. A voxel size of 250 µm was chosen in 82.4% (308 patients) of all cases for image reconstruction. In summary, 309 examinations were performed with an FOV of 100 × 100 mm² or larger and a voxel size of 250 µm or more, whereas 65 patients were examined with an FOV < 100 × 100 mm² and a voxel size of less than 250 µm, respectively (Table 1a,b). The average amperage was 6.5 mA (SD 1.3 mA) with a maximum of 10 mA and a minimum of 3 mA. The average dose length product (DLP) was 116.43 mGy × cm (SD 28.03 mGy × cm) with a maximum of 246 mGy × cm and a minimum of 52 mGy × cm, wherein 50% of the values were between 87 and 140 mGy × cm (Table 2). The software i-dixel, offered by the manufacturer Morita, was used for the digital analysis of the three-dimensional datasets, including multiplanar, dental, and 3D volume rendering reconstructions.

Table 2. Distribution of dose length product (DLP) values across the 374 examinations:

DLP (mGy × cm):      52   70   72   87  105  122  140  154  157  175  215  246
Examinations (n):     1   13    1   97   41  127   63    3    9   15    1    3

Evaluation of the Results

Each CBCT examination was assessed with regard to its justifying indication, based on the guideline of the European Commission concerning "Cone Beam CT for Dental and Maxillofacial Radiology" [18]. Multiple primary indications were possible. The referring physicians were grouped by discipline. Two radiologists (20 years and 5 years of experience in maxillofacial radiology at the time of data acquisition) evaluated all CBCT examinations in consensus regarding incidental findings.
The findings were split up into the following categories: 1. dental, 2. osseous, 3. soft tissue, 4. temporomandibular joints (TMJ), and 5. paranasal sinuses and airways. This classification system was chosen in accordance with the German ICD-10 [19]. In a consensus evaluation in cooperation with a board-certified dentist, the incidental pathologies were classified with regard to their presumed therapeutic relevance according to a three-step scale: findings were rated "red" if therapy was supposed to be necessary, "yellow" if follow-up was presumed to be sufficient, and "green" if no further treatment was required.

Subgroup Analysis

In a second part of the study, a subgroup of 54 patients was selected who had been sent to CBCT by the four main referring physicians, all with long-standing professional experience. The referring physicians were interviewed by questionnaire on the following questions: 1. did the CBCT examination provide a new diagnosis and/or a new result? (YES or NO) and 2. has any change in therapeutic management been effected due to a new diagnosis or new result? (YES or NO)

Statistical Methods

Continuous data were reported as mean, standard deviation, and min-max. Ordinal and categorical data were analyzed as absolute and relative frequencies. Additionally, pie charts were used to visualize the distribution of categorical data. For group comparisons of categorical data, the chi-square test or Fisher's exact test was used as appropriate. Logistic regression analysis was used to determine associations of FOV ≥ 100 × 100 mm², implantologic indication, radiation exposure, and dental findings. A two-sided p-value ≤ 0.05 was considered statistically significant. Because of the explorative nature of this study, all results from statistical tests had to be interpreted as hypothesis-generating and not confirmatory. An adjustment for multiple testing was not made. Statistical analysis was performed with SAS, version 9.4 (SAS Institute, Cary, NC, USA).

Results

In total, 374 patients, including 209 women and 165 men, were examined. Patients were 8-90 years old with an average age of 50.9 (±22.3) years.

Referring Physicians and Primary Indications

The patients were referred by a total of 59 physicians, including 46 (78%) dentists and 13 (22%) non-dentists; almost half of the latter were otolaryngologists.

Dental Findings

In total, 292 red-, 240 yellow-, and 245 green-rated dental incidental findings were registered. More than 50% of all patients had at least one red-rated dental pathology. Subgroup analysis of red pathologies revealed a significant influence of the FOV: when the FOV was < 100 × 100 mm², 40.0% of patients showed at least one incidental finding, compared to 54.7% of all patients who were examined with an FOV ≥ 100 × 100 mm² (p = 0.05). There was no significant difference in the detection of incidental findings between examinations with an implantologic indication (77.0%) vs. all other indications (84.2%) (p = 0.08). However, 60.7% of the patients with an implantologic indication had red-rated findings, whereas less than half (43.2%) of the other group had red-rated incidental dental findings (p < 0.01). Multiple logistic regression analysis showed a significant influence of the implantologic indication, with more relevant IF detected. The odds ratio for red IF was 1.88 (95% CI: 1.22 to 2.88). Figure 1 shows an illustrative case of periapical disease.

Figure 1. (a) Periapical lesion at tooth 12 (arrows). (b) Parasagittal reconstruction of the upper right jaw showing a polypoid mucosal swelling at the bottom of the right maxillary sinus (white arrows), probably induced by an interradicular resorption at tooth 16 (black arrow).
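The group comparisons reported in this section can be re-run from the published group sizes alone. The sketch below is a hedged Python reconstruction, not the authors' SAS code: the 2×2 cell counts are derived from the stated percentages and group sizes (54.7% of the 309 exams with FOV ≥ 100 × 100 mm² vs. 40.0% of the 65 exams with a smaller FOV had at least one red-rated dental finding), so rounding and the continuity correction may shift the statistics slightly relative to the paper's p = 0.05.

```python
# Hedged re-computation sketch of the FOV group comparison reported above.
from scipy.stats import chi2_contingency

red_large, n_large = round(0.547 * 309), 309   # ~169 of 309 (FOV >= 100 x 100 mm^2)
red_small, n_small = round(0.400 * 65), 65     # 26 of 65   (FOV <  100 x 100 mm^2)

# rows: FOV group; columns: red-rated finding present / absent
table = [[red_large, n_large - red_large],
         [red_small, n_small - red_small]]

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, OR (large vs small FOV) = {odds_ratio:.2f}")
```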
Paranasal Sinuses and Airways Findings

In total, 140 red-, 71 yellow-, and 173 green-rated incidental findings were detected in the paranasal sinuses and airways. Of all patients, 32% had at least one red-rated pathology of the paranasal sinuses and airways. The detection rate of incidental findings in total was more than doubled when using an FOV ≥ 100 × 100 mm² compared to an FOV < 100 × 100 mm² (63.4% vs. 29.2%, p < 0.01). When focusing on red-rated findings, the difference was even more pronounced: almost three times more incidental findings were registered when using an FOV ≥ 100 × 100 mm² compared to an FOV < 100 × 100 mm² (36.9% vs. 12.3%, p < 0.01) (Supplementary Table S1). The primary indication "implantology" for CBCT had a lower but still statistically significant influence on the detection rate of red-rated incidental findings. Multiple logistic regression analysis showed similar results (odds ratio 5.54 (95% CI: 2.47 to 12.43), p < 0.01), favoring the FOV ≥ 100 × 100 mm² with more relevant IF detected (Supplementary Table S2). Figure 2 shows an illustrative case of a maxillary sinusitis, probably in the context of a fungal infection.
Osseous Findings

In total, 33 red-, 54 yellow-, and 342 green-rated osseous incidental findings were detected. About 8% of all patients had at least one red-rated osseous pathology. Significantly more osseous incidental findings overall were registered when applying an FOV ≥ 100 × 100 mm² compared to an FOV < 100 × 100 mm² (73.8% vs. 46.2%, p < 0.01). When focusing on red-rated incidental findings, the total number of these findings was low, and the differences were not significant when comparing FOV ≥ 100 × 100 mm² vs. FOV < 100 × 100 mm² (9.1% vs. 4.6%, p = 0.62). The primary indication for the CBCT examination had a significant influence on the frequency of detection of osseous incidental findings: among examinations with an implantologic indication, 98.4% of CBCT revealed osseous incidental findings, compared to only 38.3% for all other indications (p < 0.01). Again, when focusing on red-rated findings, the difference between both indication groups was statistically not significant (8.9% vs. 7.7%, p = 0.85). Multiple logistic regression analysis showed no significant influence of either the implantologic indication or the FOV on relevant IF. The odds ratio was 1.04 (95% CI: 0.49 to 2.23; p = 0.91) for the implantologic indication and 1.93 (95% CI: 0.55 to 6.80) for the FOV, respectively. Figure 3 shows two illustrative cases of an osteoma and an osteoblastic metastasis.

Soft Tissue Findings

No green-rated and only three yellow-rated soft tissue incidental findings were registered. Merely one red-rated finding, an ill-defined enlarged submandibular lymph node, was detected. There was no significant difference in the frequency of incidental soft tissue findings regarding implantologic indications or FOV (p = 1.00).

TMJ Findings

Incidental pathologic findings of the TMJ were registered in only five patients. Clinical relevance was classified as "yellow" in all of these cases. There was a significant influence of indication regarding implantologic indications (0% vs. 1.3%, p = 0.03).
All five relevant yellow IF were found in patients without an implantologic indication.

Clinical Subgroup Analysis

The extended clinical analysis of 54 selected patients showed the following results: in more than half of all these patients (n = 29, 53.7%), 98 incidental diagnoses were newly detected by the CBCT examination, including 25 green-, 19 yellow-, and 54 red-rated pathologies. Within the red-rated findings, dental pathologies were most frequent with 75.9% (n = 41), followed by pathologies of the paranasal sinuses with 18.5% (n = 10) and osseous pathologies with 5.6% (n = 3). In 19 patients (35.2%), the therapy management was altered due to new diagnoses. In these particular patients, a total of 63 pathological findings were detected, including 37 (58.7%) red-rated pathologies. Within these red-rated findings, there were 25 patients with periradicular disease (67.6%) and 5 (13.5%) with pathologies of the paranasal sinuses; seven patients had other pathologies (18.9%) (Figure 4).

Figure 4. Pie chart of all incidental findings among those patients whose therapeutic management had to be adjusted due to the findings in CBCT and DVT. Presented is a breakdown of the red-rated incidental findings by their etiologic origin.
Discussion

In accordance with guidelines [18,19] and other studies [20][21][22] regarding a routine patient group, the vast majority of our patients were admitted to three-dimensional imaging with CBCT for the planning of implant placement. Before inserting dental implants, adequate imaging is demanded, and CBCT therefore seems to be the method of choice [23,24]. More than 80% of the examinations that were included in our study were performed with an FOV of 100 × 100 mm² or larger. This size of FOV reliably covers the entire maxillary and mandibular dental arch and allows the evaluation of the adjacent structures of the midface that are potentially relevant for the planning of dental therapies. Alareddy et al. published the largest number of cases to date focusing on incidental findings in dental imaging, with an FOV of 130 × 130 mm² [9]. Edwards et al. also applied a large but not exactly quantified field of view " . . . from the roof of the orbits inferiorly to at least the second cervical vertebrae" [25]. Other authors described the FOV as "large" [10,11,13,15]. Price et al. applied FOVs of 150 × 150 mm² up to 220 × 220 mm². Our mainly used FOV of 100 × 100 mm² was rather small compared to the studies mentioned above. In some other studies, a smaller FOV was chosen [16,26,27]. The non-homogeneously distributed number of examinations with a specific FOV was certainly a limitation of our study. The results of Alareddy et al. showed an incidence of 4.3 pathological findings per examination in all patients, whether they were included in the primary indication or not. Edwards et al. reported an incidence of 1.97 incidental findings per examination [25]. A pathological finding is often considered "incidental" if it is not in context with the primary indication [10-12,28,29]. Moreover, incidental findings with reference to the primary indication might also be regarded as incidental, especially if they are asymptomatic. Alareddy et al. documented 943 incidental pathologies in 1000 patients, also not distinguishing whether they were inside or outside the region of interest [9]. We detected 2.6 incidental findings per patient, with 78.6% of all patients showing incidental findings. In the study of Price et al., 272 CBCT scans revealed 881 incidental findings, equivalent to 3.2 per scan [12]. Caglayan and Tozoglu estimated the overall rate of incidental findings as 92.8% in a group of 207 consecutive patients [20]. Warhekar et al. described only 7.2% incidental findings in a cohort of 795 consecutive patients [30], which stands in contrast to the much higher rate of incidental findings in our study as well as in most other studies [16,20-22,26,29]. A possible explanation is that Warhekar et al.
analyzed written reports of CBCT examinations instead of performing a systematic image analysis themselves. Published reports differ significantly concerning the type and localization of incidental findings. Price et al. [12] as well as Caglayan and Tozoglu [10] described pathologies of the airways as the most common incidental finding, comprising 35% and 51.8% of all incidental findings, respectively. In our study, pathologies of the paranasal sinuses and airways comprised only 14.9% of incidental findings. In contrast, dental pathologies were the most frequent incidental findings in our study with 55.3%, which comprised only 11.3% and 26% of the incidental findings in the above-mentioned studies, respectively. Differences between published studies were also found concerning the frequency of indication for a CBCT examination. In contrast to our and most other studies, Caglayan and Tozoglu included only 15 out of 207 patients for implant planning [10]. Despite differences in study concepts, composition of patient groups, or indications, our results are broadly in line with other published data and demonstrated a high frequency of incidental pathological findings in CBCT of the maxillofacial region, whether they were in the region of interest or not [20-22,26]. The high incidence of 80% of dental incidental findings in all patients outlines the clinical importance of a meticulous image analysis. More than half of all findings were classified as clinically relevant. "Red" clinically relevant dental as well as airway incidental findings occurred in 61% of CBCT for implant planning, and thus almost 50% more frequently in patients who were admitted to CBCT for implant planning compared to CBCT for other indications. This might be explained by a generally poorer dental health status and a higher incidence of paranasal sinusitis in patients who need dental implants [31,32]. Considering the significantly reduced detection rate of dental incidental findings in CBCT using an FOV < 100 mm, our findings emphasize the application of an FOV of 100 × 100 mm², covering both the mandible and the maxilla in the context of an implant planning situation. In nearly every second patient of our subgroup with extended clinical analysis, a new diagnosis was found that was not known before the CBCT examination. In two thirds of patients with these newly detected diagnoses, the therapeutic management had to be adjusted. This differs from the results of Lopes et al., where most of the detected IF were classified as not requiring further treatment or referral to another professional [22]. Our subgroup analysis confirmed the reduced number of therapeutically relevant findings when using an FOV < 100 × 100 mm², especially regarding dental incidental findings as well as implantologic indications for CBCT. It could be assumed that at least some of these incidental findings may have been missed if the FOV had been limited too closely to the site of primary clinical interest [26]. Possible therapeutic complications or even implant failure may result in individual inconveniences for the patients and also monetary consequences for the public health system. This can only be estimated and should be a goal of further studies. Radiologists must deal with a holistic diagnostic work-up covering a clinically reasonable area that is not necessarily limited to the scope of the referring specialist. This work-up especially includes the detection and description of incidental findings.
Radiologists should familiarize themselves with the specific analysis of CBCT to minimize the possible consequences for patients of missed incidental findings. This emphasizes again the importance of close collaboration between medical and dental specialties, as Khalifa et al. recently pointed out [29].

Conclusions

CBCT of the maxillofacial region revealed a high percentage of clinically relevant additional findings. This study presents data underlining the clinical relevance of these findings. Our results confirmed the influence and dependency of the findings on the FOV and the primary indication, especially for implant planning. The "incidental" findings induced a change of therapy in more than one in three patients. Due to the high number of clinically relevant incidental findings in CBCT for implant planning, an FOV of 100 × 100 mm² was concluded to be recommendable for this indication. A meticulous analysis of the entire FOV is essential.

Conflicts of Interest: The authors declare no conflict of interest.
Predicting Hearing Loss from Otoacoustic Emissions Using an Artificial Neural Network

Normal and impaired pure tone thresholds (PTTs) were predicted from distortion product otoacoustic emissions (DPOAEs) using a feed-forward artificial neural network (ANN) with a back-propagation training algorithm. The ANN used a map of present and absent DPOAEs from eight DPgrams (2f1 − f2 = 406–4031 Hz) to predict PTTs at 0.5, 1, 2 and 4 kHz. With normal hearing defined as < 25 dB HL, prediction accuracy of normal hearing was 94% at 500, 88% at 1000, 88% at 2000 and 93% at 4000 Hz. Prediction of hearing-impaired categories was less accurate, due to insufficient data for the ANN to train on. This research indicates the possibility of accurately predicting hearing ability within 10 dB in normal-hearing individuals and in hearing-impaired listeners with DPOAEs and ANNs from 500–4000 Hz.

INTRODUCTION

David Kemp (1978) first described otoacoustic emissions (OAE) from the human ear and ignited tremendous interest in measurements of these emissions to develop another objective diagnostic test of hearing. Distortion product otoacoustic emissions (DPOAEs) are relatively easily measurable sinusoids, recordable in the occluded ear canal during the simultaneous stimulation with two primary pure tone frequencies, f1 and f2 with f2 > f1. The current view on DPOAE generation is that these active responses from the cochlea have two main sources of energy (Knight & Kemp, 1999; Mauermann, Uppenkamp, Van Hengel, & Kollmeier, 1999a; Mauermann, Uppenkamp, Van Hengel, & Kollmeier, 1999b; Talmadge, Long, Tubis, & Dhar, 1998). The first primary source of DPOAE energy is the result of nonlinear interaction between the two primary frequencies on the basilar membrane at the f2 place, also referred to as the generation site. The second source of DPOAE energy is caused by the reflection of the coherent waves at the 2f1 − f2 frequency place, also referred to as the re-emission site. The measured DPOAE in the ear canal is the result of the interference of both these sources. It has also recently been postulated that there are two mechanisms responsible for DPOAE generation: DPOAEs consist of a mixture of nonlinear energy arising from two locations on the basilar membrane as well as linear coherent reflection off preexisting micromechanical impedance perturbations (Kalluri & Shera, 2001).
The correlation between DPOAEs and hearing sensitivity has kept many researchers occupied over the last two decades (Bonfils, Avan, Londero, Trotoux, & Narcy, 1991; Harris & Probst, 1991; Kimberley & Nelson, 1989; Kummer, Janssen, & Arnold, 1998; Martin, Ohlms, Franklin, Harris, & Lonsbury-Martin, 1990; Nieschalk, Hustert, & Stoll, 1998; Probst & Hauser, 1990; Smurzynski, Leonard, Kim, Lafreniere, Marjorie, & Jung, 1990). The quest to predict pure tone thresholds (PTTs) accurately with DPOAEs arises not from the need to replace existing conventional behavioral evaluation procedures, but to aid in the assessment of pure tone sensitivity in difficult-to-test populations such as neonates, infants, malingerers and the critically ill. To determine PTTs in special populations with objective physiologic measurements such as tympanometry, the acoustic reflex, ABR and OAEs, procedures are often costly, require a large amount of time (Northern, 1991) and highly trained, specialized personnel, and sometimes involve sedation (Musiek, Berenstein, Hall III, & Schwaber, 1994). Above all, current objective procedures such as ABR have a limited frequency range in which hearing sensitivity can be determined accurately (Weber, 1994). There is therefore a definite need for an objective, reliable, rapid and economical test of hearing that evaluates hearing sensitivity across a wide range of frequencies to aid in the assessment of difficult-to-test populations.

The main aims of most previous studies were attempts to categorize PTTs with DPOAEs as normal or impaired (Hurley & Musiek, 1994; Kimberley, Kimberley, & Roth, 1994a; Kimberley, Hernadi, Lee, & Brown, 1994b) or to gain more information regarding the site of lesion in diagnostic audiology (Moulin, Bera, & Collet, 1994; Ohlms, Lonsbury-Martin, & Martin, 1990; Robinette, 1992; Tanaka, O-Uchi, Arai, & Suzuki, 1987). Most researchers, however, found it extremely difficult or even impossible to predict impaired PTTs or to categorize hearing sensitivity at low frequencies as normal or impaired with DPOAEs (Gorga, Neely, Bergman, & Beauchaine, 1993; Kimberley et al., 1994b; Stover, Gorga, & Neely, 1996; Zhao & Stephens, 1998). This unsatisfactory prediction of PTTs with DPOAEs is probably due to the large number of DPOAE stimulus parameters that influence optimal measurement (Bonfils et al., 1991; Gorga et al., 1993), the complex non-linear nature of the measured responses (Kummer et al., 1998; Nakajima, Mountain, & Hubbard, 1998) and the inability of conventional statistics to address this problem sufficiently (Kimberley et al., 1994a). Some of the previous studies that attempted to classify hearing sensitivity with DPOAEs as normal or impaired will be reviewed shortly. Kimberley and Nelson (1989) determined the correlation between DPOAE thresholds and PTTs in 21 ears (11 normal, 10 with a degree of sensorineural hearing loss) using an f2/f1 ratio of 1.2 and measuring DPOAE input-output (I/O) functions from 30–80 dB SPL in 6 dB steps. They claimed that DPOAE thresholds predicted PTTs within 10 dB over a range of sensory thresholds from 0–60 dB SPL for the frequencies 700–6000 Hz. This was the first report of such an accurate prediction. Kimberley et al.
(1994b) predicted hearing status in normal and hearing-impaired ears with DPOAEs at six frequencies ranging from 1025–5712 Hz. The significance of variables such as DPOAE levels, age and gender was determined in the definition of normal versus abnormal PTTs and then applied to a new set of unfamiliar data to determine their predictive accuracy at each frequency. Classification accuracy of normal hearing varied from 71% at 1025 Hz to 92% at 2050 Hz. Kimberley et al. (1994a) used an artificial neural network (ANN) approach to predict PTTs with DPOAEs, and prediction accuracy varied from 57% correct classification of hearing impairment at 1025 Hz to 100% at 2050 Hz when normal hearing was defined as PTTs < 20 dB HL. Overall classification accuracy was 80% for normal PTTs and 90% for impaired PTTs. Gorga et al. (1993) also measured the extent to which DPOAEs could accurately distinguish between normal-hearing and hearing-impaired ears. DPOAE levels at 65/55 dB SPL distinguished between normal and impaired subjects at 4000, 8000 and, to a lesser extent, at 2000 Hz. At 500 Hz, performance was no better than chance due to high biological noise levels caused, for example, by breathing and swallowing. From various reports it became clear that there are numerous DPOAE and demographic features that influence the predictive accuracy of PTTs (Avan & Bonfils, 1993; Gaskill & Brown, 1990; Kimberley et al., 1994b; Mills, 1997; Moulin et al., 1994; Stover et al., 1996). Features mentioned in these studies include the DPOAE amplitude versus threshold correlation with PTTs, the correlation of the PTT frequency with the frequency of f1, f2, 2f1 − f2 or the geometric mean of the primaries, the level of the stimuli used to evoke DPOAEs, and the possible incorporation of DPOAE amplitudes of adjacent frequencies. Moulin et al. (1994) reported that DPOAE threshold, rather than DPOAE amplitude, seemed to be the best parameter for predicting PTTs. Stover et al. (1996) argued that while DPOAE threshold offered a slightly better prediction of PTTs than DPOAE amplitude, it lengthened test times due to the difficulty of threshold determination against a noisy background and therefore reduced its clinical utility as a tool for the identification of hearing loss. Stover et al. (1996) and Kimberley et al. (1994b) observed that the single most important variable to categorize PTTs as normal or impaired was the DPOAE amplitude in response to moderate-level primaries (L1 at 55 or 60 dB SPL) with f2 frequencies close to the PTT frequency. Mauermann et al. (1999b) found that the DPOAE fine structure (fine structure has been defined by Talmadge et al., 1998, as quasiperiodic variations in DPOAE amplitude and phase with variations in DPOAE frequency) might be a more sensitive indicator of hearing impairment than DPOAE amplitude alone. In cases where the primary frequencies fell in areas of normal hearing but the PTT corresponding to the DPOAE frequency was impaired, the 2f1 − f2 DPOAE was still measurable but the fine structure disappeared.
When it comes to the analysis of DPOAE data, there are different frequency variables of the DPOAE to use for the prediction of PTTs. Possibilities include the 2f1 − f2 frequency, f1, f2 or the geometric mean of the primaries. Some researchers correlated the geometric mean frequency of the primaries with the PTT frequency (Bonfils et al., 1991; Lonsbury-Martin & Martin, 1990; Martin et al., 1990). Others found that DPOAE amplitudes at frequencies at and adjacent to the f2 frequency were most predictive of normal hearing sensitivity (Harris, Lonsbury-Martin, Stagner, Coats, & Martin, 1989; Kimberley et al., 1994a; Kimberley et al., 1994b; Kummer et al., 1998). Recent studies proved that the region close to the f2 place on the basilar membrane is the primary source of DPOAE energy, and therefore the f2 frequency is currently the preferred frequency to correlate with pure tone thresholds (Mauermann et al., 1999a; Mauermann et al., 1999b; Talmadge et al., 1998).

Regarding the level of the stimuli to use in measuring DPOAEs for PTT prediction purposes, most researchers agree that moderate and lower level stimuli (below 65 dB SPL) are more suitable for PTT prediction and that lower levels are more frequency specific (Avan & Bonfils, 1993; Gaskill & Brown, 1990; Kimberley et al., 1994b; Mills, 1997). According to Bonfils et al. (1991), when primary intensities higher than 60 dB SPL are used for stimulus generation, it is probable that only passive properties of the cochlea contribute to the emission and that it might not indicate true outer hair cell functioning. It has recently been postulated by Kalluri and Shera (2001) that, as there are two mechanisms in the generation of DPOAEs, passive linear components may be present even in a low-level DPOAE.

Most studies that attempted to predict PTTs with DPOAEs used statistical methods such as multivariate discriminant analysis (Kimberley et al., 1994b), relative operating characteristic curves (Gorga et al., 1993), or routine statistical tests such as χ², Student's t test, matched t tests and Pearson's correlation coefficients (Moulin et al., 1994). A study by Dorn, Piskorski, Gorga, Neely, & Keefe (1999) compared single-variable and multivariate statistical approaches to PTT prediction with DPOAEs and concluded that superior performance and greater predictive accuracy were obtained with multivariate techniques. Kimberley et al.
(1994a) used a different data processing technique, called artificial neural networks (ANNs), for the prediction of PTTs with DPOAEs. This experiment compared the classification performance of ANNs with discriminant analysis and found that ANNs outperformed the more traditional statistical technique in the hearing-impaired cases. This was probably because of this algorithm's superior ability to model complex problems and determine nonlinear correlations, and its excellent predictive abilities (Blum, 1992; Rao & Rao, 1995). Blum (1992) identified three advantages of neural networks over conventional statistical methods that can be applied in the prediction of PTTs with DPOAEs: a) ANNs require less need to determine relevant factors a priori. Irrelevant data has such a low connection strength that it has no effect on the outcome, and the ANN determines which factors are relevant. b) The sophistication of the ANN model allows it to take hundreds of factors into account simultaneously, and this directness of the model enables it to solve complex problems in much less time. c) ANNs are extremely fault tolerant and can learn on noisy or incomplete data, which is often the case with absent or noisy DPOAEs. Kimberley et al. (1994a) categorized hearing as normal or impaired (normal was defined as less than 20 dB HL) with DPOAEs and ANNs but made no attempt to categorize the magnitude of the hearing loss by performing any further predictions.

The aim of this study was to investigate whether ANNs could categorize PTTs into different groups of hearing sensitivity, not only by distinguishing between normal and impaired, but by categorizing impaired PTTs into 10 dB groups. ANNs were used to determine a correlation between selected measured variables of DPOAEs and PTTs and to apply the correlation to make a prediction. The measured variables included DPOAEs at eleven 2f1 − f2 frequencies ranging from 406 to 4031 Hz and PTT information at 0.5, 1, 2 and 4 kHz. Controlled variables included the frequencies of the primaries, ranging from f1 = 500–5031 Hz with a fixed f2/f1 ratio of 1.2, and the levels L1 and L2, with L2 ranging from 60 to 25 dB and L1 exceeding L2 by 10 dB. There was experimentation with certain manipulated variables to determine their effect on PTT prediction accuracy. Manipulated variables included experimentation with DPOAE amplitude representation in the ANN: either as a categorical value (with the dummy variable technique, explained under data preparation) or as a fraction of the maximum amplitude value. Subject age was included in the ANN as a categorical value with the dummy variable technique, and there was experimentation with age representation in 5- or 10-year categories. The subject variable gender was always included in the ANN and was depicted with a one or a zero. Other manipulated variables included ANN topology experimentation with middle-layer neuron quantities (80, 100 and 120) and error tolerance levels (0.1%, 0.2% and 0.3%). Middle-layer neuron quantities were doubled for experiments where amplitude was presented as a categorical value (referred to as ALT AMP) to balance the increase in input neurons, as suggested by Rao and Rao (1995).
There is one major difference between this study and most other studies predicting PTTs with DPOAEs that should be clearly stated: this experiment did not use the single f2 value corresponding to the PTT frequency, but used the present and absent responses for all 11 f2 frequencies as ANN input in the prediction of each PTT frequency. The use of frequencies other than the single closest corresponding f2 frequency is not a new idea: Kimberley et al. (1994a) used amplitudes of DPOAE frequencies adjacent to the f2 frequency to correlate with the PTT frequency and found that this improved PTT prediction. Experimentation with the use of the whole spectrum of DPOAE information to predict a single PTT was attempted to enhance prediction abilities at low frequencies, especially 500 Hz, which has proven difficult or impossible for many other researchers (Durrant, 1992; Gorga et al., 1993; Moulin et al., 1994; Probst & Hausser, 1990; Stover et al., 1996).

Subjects

Data was obtained from 70 subjects (120 ears; in some cases only one ear fell within the subject selection specifications), 28 males and 42 females, ranging from 8 to 82 years old. Subjects were divided into three groups. The first group represented ears with normal hearing and had pure tone averages (PTAs) of 0 to 15 dB HL. For the determination of PTAs, 500, 1000, 2000 and 4000 Hz were taken into consideration. Four thousand Hertz was also included since it was one of the frequencies to be predicted. The second group represented mild hearing loss and had PTAs between 16 and 35 dB HL, and the third group represented moderate hearing loss with PTAs between 36 and 65 dB HL. There were 46 ears in the first group, 33 in the second and 41 in the third group. The patterns of hearing loss of this population are presented in Table 1. The age and gender distribution of subjects can be seen in Figure 1.

Sampling

Subjects were drawn from a private practice in Audiology, the clinic at the University of Pretoria, and a school for the hard of hearing. The main aims of the study and the procedure for obtaining data were described, and subjects were asked if they would be willing to participate.

Subject selection criteria

The selection criterion was normal middle ear functioning. Subjects demonstrating type A tympanograms with static immittance between 0.3 and 1.6 mmhos (measured at 226 Hz) and a peak (or point of maximum admittance) between −100 daPa and 25 daPa were accepted for this study. There was no selection criterion regarding age or gender. Only the pediatric population was excluded from this study, due to differences in middle ear properties such as canal length, canal volume and middle ear reverse transmission efficiency that may have caused differences in DPOAE amplitudes (Lee, Kimberley, & Brown, 1993).
Subject selection procedure

If a subject agreed to participate in the study, a brief interview obtained limited background information such as subject name, gender and date of birth, information regarding hearing status and family history of hearing loss, history of middle ear problems, noise exposure, tinnitus or vertigo, and medications currently used. The interview was followed by otoscopic examination, tympanometry and pure tone audiometry. Pure tone audiometry was performed in a soundproof booth on a GSI 60 audiometer. When PTTs exceeded 10 dB HL, pure tone bone conduction was also performed. Threshold determination was in 5 dB steps, and a threshold was defined as the lowest hearing level with a minimum of two out of three responses at the specified dB level (Yantis, 1994). If a subject met the subject selection criteria, DPOAE measurements followed in a quiet room. The GSI 60 audiometer and GSI 28-A tympanometer met calibration requirements, and the DPOAE probes were calibrated for the quiet room in which testing was performed.

Specification of DPOAE stimulus parameters

The frequency range evaluated spanned from f1 = 500 to 5031 Hz with an f2/f1 ratio of 1.2. Three data points per octave were selected, as per the Grason-Stadler Inc. DPOAE user manual (1997) for Model GSI-60, resulting in 11 primary frequency pairs. In the present study, eight DPgrams were obtained for each ear.

DATA PREPARATION AND PROCESSING

For this experiment, a commercially available feed-forward ANN with a back-propagation training algorithm and one hidden layer was used, from Rao and Rao (1995). An artificial neural network is a computer program that consists of computational units, referred to as neurons. Neurons receive inputs analogous to the impulses that the dendrites of biological neurons receive, each input with its own mathematical value, or weight, indicating the importance of the input. The neuron calculates a total for all inputs, compares it to a threshold value and produces an output, just like a biological neuron sends an output through its axon. A schematic representation of an artificial neuron can be seen in Figure 2. Several neurons are combined to form an artificial neural network. A schematic representation of the three-layered feed-forward network used in this study can be seen in Figure 3.
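As a concrete illustration of the artificial neuron just described, the following minimal Python sketch computes a weighted sum of inputs and passes it through a logistic (threshold-like) activation. The weights and inputs are invented for illustration and are not values from the study.

```python
# Minimal sketch of an artificial neuron: weighted sum + sigmoid activation.
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, squashed by a logistic activation in (0, 1)."""
    total = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-total))

x = np.array([1.0, 0.0, 1.0])    # e.g. present/absent DPOAE bits (illustrative)
w = np.array([0.8, -0.4, 0.3])   # connection strengths (weights, illustrative)
print(neuron(x, w, bias=-0.5))
```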
ANNs have two phases of operation, a training phase and a predicting phase. During training, the ANN assigns a weight (mathematical value) to every input it receives, and this weight affects the importance or impact of that input (Blum, 1992; Nelson & Illingworth, 1991). It is through repeated adjustments to the weights that the network learns. At first, weights are assigned at random. The ANN computes the output, compares it with the desired answer, and adjusts the weights repeatedly until the desired answers can be predicted for all the cases in the training set (Medsker, Turban, & Trippi, 1993). When the prediction error for the training set is zero or acceptably low, the weights are frozen and the network is then presented with unfamiliar data to make a prediction based on the learned correlation (Rao & Rao, 1995). The topology of the network, such as the number of neurons in the hidden layer, error tolerance and learning rates, was determined by experimentation but was not the result of an exhaustive search, so the possibility exists that different topologies might yield better results. In this study, there was experimentation with the number of hidden-layer neurons as 80, 100 and 120 for all experiments except where amplitude was represented with the dummy variable technique as a categorical value (referred to as the ALT AMP technique). ALT AMP created many more input neurons, and the middle-layer neurons were doubled to compensate for the additional neural network complexity. Acceptable error during training was 0.001, 0.002 and 0.003 (within 0.1%, 0.2% and 0.3%). The Beta value, or learning rate parameter, was 0.5.

To prepare the DPOAE data for ANN training, all DPOAE responses from the eight DPgrams were rewritten in a binary fashion by using the dummy variable technique (Licht, 1998). Age was depicted in a 5- or 10-year category with ones and zeros: for example, in the case of the 10-year category experiment, a 12-year-old subject was depicted with a one in the second 10-year category and zeros in the other categories (01000000), and an 82-year-old subject as (00000001). Gender was depicted with a one or a zero. The amplitude of the DPOAE was either depicted as a fraction of the maximum DPOAE amplitude (40 dB) measured in the study (referred to as AMP 40), as a fraction of 100 (referred to as AMP 100), or, by using the dummy variable technique, as a categorical value in one of four 10 dB categories (referred to as ALT AMP).

For ANN training, the DPOAE responses and PTTs of 118 ears were used to learn the correlation between DPOAEs and PTTs. Both ears of a subject were left out for each training phase. (The procedure was repeated for every subject; there were therefore 120 training phases.) In the prediction phase, the ANN was presented with only one of the DPOAEs of the two remaining "unfamiliar" ears. The ANN then predicted that ear's PTT based on the learned correlation of the training phase. This process was also repeated 120 times, to predict each ear once. The output of the ANN was a prediction of pure tone sensitivity at a specific frequency. This prediction, however, was not in decibel form, but a categorization of hearing ability into eight 10 dB categories.

Data analysis consisted of analyzing the actual and predicted values of all 120 ears and determining how many were predicted accurately within the 10 dB category, how many were predicted in an adjacent 10 dB category, and how many were predicted into a category more than 20 dB out.
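The leave-subject-out protocol just described (leave both ears of one subject out, train on the remaining 118 ears, predict the held-out ears, repeat) can be sketched as follows. This is a hedged illustration, not the Rao and Rao (1995) program used in the study: scikit-learn's MLPClassifier stands in for the original one-hidden-layer back-propagation network, and the data are random placeholders with the stated feature layout (88 present/absent DPOAE bits from 8 DPgrams × 11 frequencies, a gender bit, and an age dummy block), so its accuracy is at chance level.

```python
# Hedged sketch of the leave-one-subject-out training/prediction loop.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_ears, n_dpoae_bits, n_age_cats = 120, 88, 8   # 8 DPgrams x 11 frequencies

# one row per ear: DPOAE present/absent bits + gender bit + age dummy block
X = np.hstack([rng.integers(0, 2, (n_ears, n_dpoae_bits)),
               rng.integers(0, 2, (n_ears, 1)),
               np.eye(n_age_cats)[rng.integers(0, n_age_cats, n_ears)]])
y = rng.integers(0, 8, n_ears)        # eight 10 dB PTT categories (placeholder)
# simplified: 60 subjects with two ears each (the study had 70 subjects, 120 ears)
subject = np.repeat(np.arange(60), 2)

correct = 0
for s in np.unique(subject):
    train, test = subject != s, subject == s   # leave both ears of one subject out
    # the study used 80-120 hidden neurons and a learning rate of 0.5; 100 is
    # a representative choice here, with sklearn's default optimizer settings
    net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
    net.fit(X[train], y[train])
    correct += (net.predict(X[test]) == y[test]).sum()

print(f"within-category accuracy: {correct / n_ears:.2f}")  # ~chance on random data
```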
RESULTS

Tables 2 to 5 summarize results for the best neural network experiments predicting 500, 1000, 2000 and 4000 Hz. The tables present results in two categories: correct predictions are classified as within the same 10 dB category, while one category out represents predictions made into an adjacent 10 dB category. Remaining results were predicted more than one 10 dB category out. Results are given for each 10 dB category as well as for overall ANN performance in predicting normal hearing. The abbreviations for manipulated variables that outline the design of the specific experiment are explained in the key following each table.

PREDICTION OF NORMAL HEARING

The prediction of 500 Hz with DPOAEs has been problematic for many researchers due to the rising noise floor below 1000 Hz (Durrant, 1992; Gorga et al., 1993; Stover et al., 1996). It seems that even normal and near-normal ears exhibit no or small DPOAE amplitudes at 500 Hz (Probst & Hauser, 1990). In this study, normal hearing (<25 dB HL) at 500 Hz could be predicted with 94% accuracy. Normal hearing at 1000 Hz was predicted with 88% accuracy, 2000 Hz with 88% accuracy and 4000 Hz with 93% accuracy. The mean false positive rate for this study was 4% and the mean false negative rate was 16%. The false negative rate, indicating the sensitivity of the procedure, was still unacceptably high for diagnostic purposes (Brass & Kemp, 1994), even though it was lower than reported elsewhere (Gorga et al., 1993; Kimberley et al., 1994b; Stover et al., 1996).

Reasons for the improved prediction of normal hearing at low frequencies involve a number of possibilities: First, it might be possible that the ANN found significant information in the fine structure of the higher frequency DPOAEs, hidden in the pattern of all present and absent responses of the eight DPgrams, to enable an accurate prediction at 500 Hz. Another possibility is the fact that PTTs are heavily interrelated, and with enough information at the high frequencies, it is possible for the ANN to predict the low frequency as one of a limited number of "audiogram pattern" or "DPOAE pattern" possibilities. It should be mentioned, however, that this neural network prediction at 500 Hz is not mere coincidence. Neural networks cannot do magic; a clear-cut correlation is needed to enable an accurate prediction. Two unrelated data sets will produce predictions that approximate random noise. The ANN was therefore able to extract enough information from the DPOAE spectrum to correctly classify normal hearing at 500 Hz with 94% accuracy.

PREDICTION OF HEARING LOSS

Even though classification of hearing ability as normal or impaired as low as 500 Hz was surprisingly accurate, predictions of categories representing hearing loss were disappointingly poor. Table 6 summarizes overall prediction accuracy across all eight 10 dB categories for all frequencies.
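A hedged sketch of how the false positive and false negative rates reported above can be tallied from actual and predicted thresholds. The 25 dB HL cut-off is from the text; the thresholds used below are invented, and the convention that a false positive flags a normal ear as impaired follows the screening framing of this section.

```python
def screening_rates(actual, predicted, cutoff=25):
    """Tally false positive and false negative rates for a normal/impaired
    decision at `cutoff` dB HL.  A false positive flags a normal ear as
    impaired; a false negative passes an impaired ear as normal."""
    fp = fn = n_normal = n_impaired = 0
    for a, p in zip(actual, predicted):
        if a < cutoff:
            n_normal += 1
            fp += p >= cutoff
        else:
            n_impaired += 1
            fn += p < cutoff
    return fp / n_normal, fn / n_impaired

# Invented example thresholds (dB HL) for six ears.
print(screening_rates([10, 15, 40, 20, 55, 30], [15, 30, 35, 10, 20, 40]))
```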
Although prediction of PTTs within a 10 dB category representing hearing loss was correct only 32% to 40% of the time, prediction into an adjacent 10 dB category was more frequent. (The criterion for a test to be accepted by the GSI-60 DPOAE system in a screening test is that the DPOAE amplitude has to be 10 dB above the noise floor, or the cumulative noise level has to be -18 dB SPL. The maximum number of frames tested in a screening procedure is 400, and if no clear response is measured in that time, the test is scored "timed out", which means that no response was obtained.) It is possible that more responses could be obtained from hearing-impaired subjects if the criterion for test acceptance is lowered, to 5 dB for example. The lowered criterion for the acceptance of a test could possibly increase the number of useable responses from hearing-impaired subjects and might therefore enhance prediction accuracy for categories spanning impaired hearing. This aspect should be further investigated.

Another reason for poor prediction accuracy in hearing-impaired categories might be that the optimal procedure for data analysis has not yet been identified. It can be hypothesized that another type of neural network with different topologies, learning rules and error tolerances would be able to make more accurate predictions. Another possibility is a completely new form of data processing, such as "genetic programming", inspired by Darwinian invention and problem solving, that "progressively breeds a population of computer programs over a series of generations" (Koza, Bennett, Andre, & Keane, 1999, p. 3) to find an optimal solution to a problem. These possibilities have yet to be investigated. One definite aspect, however, that seems to have influenced the prediction accuracy of hearing ability in categories spanning hearing impairment more than neural network capabilities and the underlying correlation between the two data sets, was the number of ears in every category that the neural network had to train on.

Neural networks need enough examples in every category to form representations of how a specific ear's DPOAE type relates to its PTT type in order to make accurate predictions. Even though this study attempted to categorize all audiograms into three groups to ensure that hearing impairment was as well represented as normal hearing, the nature of sensorineural hearing impairment tends to affect certain frequencies more than others, and low frequencies are often normal. In the case of 500 Hz, this leads to an uneven distribution, with many ears in categories representing normal hearing and few ears in hearing loss categories. At 4000 Hz, however, category six (50 and 55 dB) had more ears (a total of 20) and was predicted accurately 45% of the time and within 10 dB 30% of the time. Hearing loss at 4000 Hz in this category was predicted incorrectly only 25% of the time, with a false negative rate of only 2%. The same category at 500 Hz had only 7 ears to train on, and prediction accuracy was never correct and within 10 dB only 14% of the time. If the network had had more ears in every category to train on, prediction accuracy might have been considerably better. To illustrate this concept, Figure 4 plots the number of ears in every category against prediction accuracy. It clearly demonstrates that the number of ears in every category had an enormous effect on prediction accuracy.
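The GSI-60 acceptance criterion quoted at the start of this section can be expressed directly as a small decision rule. A minimal sketch: the 10 dB signal-to-noise requirement, the -18 dB SPL cumulative noise level, and the 400-frame limit are taken from the text, while the function name and return labels are illustrative.

```python
def dpoae_test_result(dp_amplitude, noise_floor, cumulative_noise, frames_used):
    """Screening acceptance rule for one DPOAE measurement, following the
    GSI-60 criteria described in the text (all levels in dB SPL)."""
    if dp_amplitude >= noise_floor + 10 or cumulative_noise <= -18:
        return "response"
    if frames_used >= 400:
        return "timed out"  # no clear response within the frame limit
    return "continue testing"

print(dpoae_test_result(dp_amplitude=5, noise_floor=-10, cumulative_noise=-20, frames_used=120))
```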
The aim of this study is not to describe the relationship between the number of ears per category and the resulting prediction accuracy. However, Figure 4 shows a number of possible relationships merely to illustrate the notion of "more is better". The linear fit (in dotted lines) shows that higher numbers of ears lie significantly above expectation. It is worth noting that the relationship cannot be linear, since any line with a slope larger than zero will have to cross the 100% accuracy limit at some point, which is of course not possible.

The figure indicates an alternative fit (the solid line) of the form 1/(1 + e^(-mx+b)), which seems better and more intuitive, since it starts out low, just like the experimental data, but also asymptotically approaches 100% for large numbers of ears. It has not been established whether this function has any relationship to the sigmoid function 1/(1 + e^(-x)) (Blum, 1992:39) that was used to normalize the output of the separate ANN layers, but it seems to be a more likely option than a linear fit (a curve-fitting sketch of this form is given below, after the figure captions).

For this data set there is a fairly clear threshold at 32 ears per category, where prediction accuracy suddenly surges into the 75% and higher region. Should any of the example relationships in Figure 4 hold, it is expected that predictions with 95% or higher accuracy could potentially be made if an ANN receives 80 or more ears per category.

CONCLUSION

ANNs were able to extract significant information from the DPOAE spectrum to form an internal representation of the correlation between DPOAEs and PTTs, and used that information successfully to predict normal hearing with high levels of accuracy as low as 500 Hz. The unsatisfactory prediction of categories representing hearing loss is most likely due to the shortage of data in certain categories, and not to poor correlation between DPOAEs and PTTs of hearing-impaired ears or an incapability of the neural network to deal with this data set. With more data, it seems possible to predict PTTs within 10 dB from 500 Hz to 4000 Hz, for hearing losses up to 65 dB HL.

Table 1: Distribution pattern for different types of hearing loss in the 120-ear data set (recoverable fragment):
- Flat configuration up to 2 kHz with >20 dB PTT drop in high frequencies: 0, 10, 1
- Low frequency loss (0.5-1 kHz more impaired than 2-4 kHz): 2, 0, 0
- Notch (notch-shaped loss around 1-3 kHz): 4, 3, 0

Figure 2: An artificial neuron.

Figure 3: Schematic representation of a three-layered feed-forward artificial neural network.
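A minimal sketch of fitting the alternative curve 1/(1 + e^(-mx+b)) discussed above to ears-per-category versus accuracy data, assuming scipy is available. The data points below are invented placeholders, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, m, b):
    """Accuracy model of the form 1 / (1 + exp(-m*x + b)); approaches 1 as x grows."""
    return 1.0 / (1.0 + np.exp(-m * x + b))

# Invented placeholder data: ears per category vs. fraction predicted correctly.
ears = np.array([7, 12, 20, 32, 45, 60], dtype=float)
accuracy = np.array([0.05, 0.15, 0.40, 0.75, 0.85, 0.90])

(m, b), _ = curve_fit(logistic, ears, accuracy, p0=[0.1, 3.0])
print(logistic(80, m, b))  # projected accuracy with 80 ears per category
```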
Data was transformed into dichotomous variables indicating the presence or absence of a specific category. Absent DPOAE responses were depicted with a zero and present responses with a one. The pattern of all present and absent responses of the eight DPgrams served as input stimuli for the ANN. The ANN was presented with 88 input stimuli depicting the DPOAE information of each ear (the eight DPgrams had 11 frequencies each, resulting in 88 possible DPOAE responses). The age variable was depicted in the same binary fashion as input into the ANN by indicating the appropriate 5-year category with a one.

Key: Age → age increment presented to ANN in 5-year categories; LF pres → low frequency DPOAEs (<1000 Hz) included in ANN; ALT AMP → amplitude of DPOAE presented as a categorical value with the dummy variable technique; Mid = 200 → number of middle level neurons in ANN topology; Err = 0.003 → error tolerance of ANN, within 0.03% accurate.

* There were no ears in category eight; the largest hearing loss measured for 1000 Hz was 65 dB HL.
** There were no ears in category eight; the largest hearing loss measured for 500 Hz was 65 dB HL.

Figure 4: Prediction accuracy and ear count correlation.
Impact of Motivational Interviewing on Parental Risk-Related Behaviors and Knowledge of Early Childhood Caries: A Systematic Review

Background: Behavior is important in dental disease etiology, so behavioral interventions are needed for prevention and treatment. Motivational interviewing (MI) has been proposed as a potentially useful behavioral intervention for prevention of early childhood caries. Methods: Studies have evaluated the effectiveness of MI on reduction of the risk-related behaviors for early childhood caries (ECC) compared to dental health education (DHE). The aim of this systematic review was to assess the scientific evidence on MI applied to change parental risk-related behaviors. The potentially eligible studies involved the assessment of caries-related behaviors in caregivers receiving MI. An electronic search of English published literature was performed in February 2020 in the Scopus, Cochrane, PubMed, and Embase databases. Assessment of risk of bias was done with the Cochrane risk of bias tool. Results: Of 329 articles retrieved initially, seven were eligible for inclusion in this review. Four studies evaluated the behavior of tooth brushing and four studies assessed cariogenic feeding practice, while only one study investigated the behavior of checking teeth for pre-cavities. Moreover, two studies examined dental attendance for fluoride varnish use and oral health-related knowledge. It was not possible to perform a meta-analysis. Conclusions: Generally, the results support the application of MI to improve "dental attendance behavior for fluoride use" and participants' knowledge. However, the results were inconclusive for other behaviors. We need further and better designed interventions to completely evaluate the impact of MI on specific ECC-related behaviors.

Introduction

Early childhood caries (ECC) is a preventable illness that is defined as the existence of one or more decayed, missing, or filled teeth (due to caries) in the primary dentition in children aged less than six. [1] Although the most common preventive approach for child caries is parental education, research does not support the efficacy of parental education alone in decreasing ECC. [2,3] Evidence shows that providing individuals with accurate information may help them to modify their behaviors, but this method alone will not cause behavior change. [4] It has been found that education alone is not effective because a health professional's direct persuasion is often carried out with no regard for the parents' readiness to modify their behaviors. [5] Dental health education has been known as the gold standard among different non-invasive preventive interventions for children at risk of developing caries. In this approach, parents or caregivers are given information about children's dental health via pamphlets, posters and media campaigns. [6]

Motivational interviewing (MI) is one of the methods of behavior change which reduces the individual's resistance to change. [7] It helps people to explore and resolve their uncertainty toward change as a client-centered but directive counseling strategy. [8] This strategy has been successfully applied to various health behaviors such as substance use disorders, [9,10] smoking, [11,12] diet and exercise, and medication dependence. [13] Moreover, it has been reported that MI is efficacious in guiding patients to apply changes to oral health-related behaviors such as snacking and tooth brushing habits.
[14-16] A systematic review in 2014 on the efficacy of MI in enhancing oral health showed an inconclusive effect of MI on most oral health outcomes. The authors argued that better interventions should be developed to completely evaluate the effect of MI on oral health and determine a proper dose for motivational counseling. Moreover, further interventional studies on specific oral health-related behaviors, and systematic reviews, have been suggested to target this area of research from a narrower perspective. [7] Given the new publications in recent years, this study aimed to systematically review randomized clinical trials (RCTs) to assess the effect of MI-based parental interventions on reducing ECC-related behaviors compared to traditional dental health education (DHE) and to determine their limitations.

Methods

This systematic review was performed based on the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA statement). [17] This review was supported by Isfahan University of Medical Sciences Research under award code IR.MUI.RESEARCH.REC.1399.228.

Search strategy

A search for relevant studies was done after defining a well-focused PICO question and inclusion/exclusion criteria [Table 1]. The articles assessed were conducted on parents/caregivers (P, population) trained by MI after the birth of their children (I, intervention), compared to no education or traditional DHE provided following the birth of their children (C, comparison), and their behavior modifications were evaluated (O, outcome). The selection of keywords was based on MeSH and non-MeSH terms in simple or multiple conjunctions. The Embase, Scopus, Cochrane, and PubMed databases were searched with no filters applied except for language; i.e., only studies in English were evaluated. Moreover, a manual search was performed to retrieve any articles that might otherwise have been missed. The latest date of the database search in this research was February 2020. The database search strategies are presented in Table 2.

Selection of studies

Two authors (ShM and AK) searched the above-mentioned databases independently using the developed search strategy. Endnote software version 8 (Thomson Reuters, NY, USA) was used for eliminating duplicated studies, final confirmation, and cross-matching. The authors reviewed the abstracts of the articles and selected those that met the inclusion criteria. The full texts of the chosen abstracts were then screened, as a result of which some studies were excluded. The correlation coefficients between the search results of the two authors regarding the abstracts and full texts were 0.93 and 1, respectively. In the case of any disagreement between the two authors, the third author (RF) evaluated the disagreement and made the final decision.

Assessment of risk of bias

Each study was assessed for internal methodological risk of bias based on the Cochrane collaboration tool. This tool takes into account selection, performance, detection, attrition, reporting, and other sources of bias (including industry-related bias or professional interest) and makes use of three reporting terms: high risk of bias, unclear risk of bias, and low risk of bias. Using this approach, each article was then categorized according to its risk of bias.
Trials with a high risk of bias in at least one item were considered to have an overall high risk of bias, trials with an unclear risk of bias in one or more major domains were regarded as having a moderate risk of bias, and trials with a low risk of bias in all domains were considered to have an overall low risk of bias. The data gathered for each study included the authors' names, publication year, sample characteristics, studied groups and their sample sizes, number of MI sessions, duration of MI sessions, measured outcomes, final conclusion, and follow-up duration.

Results

A flow diagram of the search strategy is shown in Figure 1. The search yielded a total of 329 articles (44 on Embase, 114 on Cochrane, 44 on Scopus, and 169 on MEDLINE (PubMed)). The abstracts of 74 articles were evaluated after excluding the duplicated and irrelevant ones. Therefore, 14 articles remained for full-text analysis, from which seven [3,18-23] were excluded for the reasons presented in Table 3. Finally, the remaining articles [2,5,6,14,15,24,25] were included in the evidence table. The descriptive results and parameters obtained for each study are indicated in Table 4. A detailed assessment of risk of bias is indicated in Figure 2. Due to the lack of blinding of participants and personnel, all articles studied had an overall high risk of bias; the impossibility of blinding the counselors explains this bias to some extent.

All the reviewed articles were randomized clinical trials (RCTs) that had included a total of 2888 participants in intervention and control groups, who received MI versus no education or traditional DHE, respectively. As for the number of MI sessions, all studies trained the participants in one session except the study of Henshaw et al., [25] in which the mean number of sessions the participants attended was 2.8. The duration of MI sessions varied from 20 to 45 minutes. Moreover, the follow-up period in four studies was 24 months, [5,6,24,25] while it was 1-8 months in three other trials. [2,14,15]

The behavioral outcomes varied among the reviewed studies. Harrison et al. [5] and Weinstein et al. [24] evaluated only the number of visits for fluoride varnish use, but others assessed more parameters such as cariogenic feeding practice, tooth cleaning frequency, and checking for pre-cavities. In addition, some studies evaluated clinical outcomes. For example, Harrison et al., [5] Henshaw et al., [25] and Manchanda et al. [2] reported the number of decayed, missed, and filled teeth/surfaces. Further, Weinstein et al. [24] reported the incidence of new dental caries. However, this systematic review was not aimed at evaluating these clinical outcomes.
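The overall risk-of-bias rule stated at the start of this section maps directly onto a small helper function. A minimal sketch, assuming per-domain ratings are recorded as strings; the dictionary keys are illustrative, and the "major domains" qualifier in the text is simplified here to all domains.

```python
def overall_risk_of_bias(domain_ratings):
    """Aggregate per-domain ratings ('low', 'unclear', 'high') into an overall
    judgment: any 'high' -> high; otherwise any 'unclear' -> moderate; else low."""
    if "high" in domain_ratings.values():
        return "high"
    if "unclear" in domain_ratings.values():
        return "moderate"
    return "low"

ratings = {"random sequence": "low", "allocation concealment": "unclear",
           "blinding of participants": "high", "outcome assessment": "low"}
print(overall_risk_of_bias(ratings))  # -> high
```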
Table 2: Search strategies.
PubMed/Cochrane: (parent or child or children or preschool children or infants or mother or pregnant women) and (motivational interviewing or motivational counseling or behavior interviewing or motivational change or motivational enhancement therapy or motivational intervention or motivational consultation or direct counseling or client centered counseling or patient centered counseling) and (traditional health education or conventional education or control or education or oral health promotion) and (dental caries or tooth caries or tooth decay or early childhood caries or ECC or oral health-related behavior or oral health-related risks or ECC-related risks or cavitated lesion or non-cavitated lesion or oral hygiene)
Scopus/Embase: ("Parent" or "child" or "children" or "preschool children" or "infants" or "mother" or "pregnant woman") and ("motivational interviewing" or "motivational counseling" or "behavior interviewing" or "motivational change" or "motivational enhancement therapy" or "motivational intervention" or "motivational consultation") and ("traditional health education" or "conventional education" or "control" or "education" or "oral health promotion") and ("dental caries" or "tooth caries" or "tooth decay" or "early childhood caries" or "ECC" or "oral health-related behavior" or "oral health-related risks" or "ECC-related risks" or "cavitated lesion" or "noncavitated lesion")

Table 3 (reasons for exclusion, recoverable fragment): Weinstein 2004, not reporting a behavioral outcome; Colvara 2018, not reporting a behavioral outcome.

Number of visits for fluoride varnish application

Two studies investigated this outcome and found that the number of visits for fluoride varnish use was significantly higher in the MI group than in the DHE group after a 24-month follow-up. [5,24]

Checking for pre-cavities

This behavior was assessed by Ismail et al. over a 2-year follow-up. They indicated that the MI approach significantly promoted the caregivers' checking for pre-cavities compared to DHE (P value = 0.03). [6]

Tooth cleaning frequency

As one of the most frequently studied behaviors, tooth cleaning frequency was investigated in four studies. [6,14,15,25] Naidu et al. [15] showed significant improvement of this behavior in the MI group versus the DHE group (p < 0.01), but two other studies did not confirm this result. [6,25] In their study, Freudenthal et al. [14] did not make inter-group comparisons. They showed a significant increase in the frequency of tooth cleaning in the MI group following a one-month follow-up; however, this increase was not statistically significant in the control group. In addition, Manchanda et al. reported a rising trend in toothbrush use as an aid to tooth cleaning in children after an 8-month follow-up in the MI, DHE, and no-education groups. [2]

Cariogenic feeding practice

Four articles evaluated this outcome. [2,6,14,25] Ismail et al. [6] and Henshaw et al. [25] reported that MI did not significantly promote this behavior in the MI group versus the DHE group (P > 0.05 and P = 0.422, respectively). Freudenthal et al. [14] reported that neither MI nor DHE was able to change this behavior in the studied participants after one month, confirming the results of the above studies. Manchanda et al. [2] indicated a remarkable decline in "bottle feeding at demand" and "night feeding through bottle" after 8 months in both MI and DHE groups. Likewise, the behavior "giving sugary items between meals" was improved. The data of inter-group comparisons were not reported in this research.
Freudenthal et al. [14] evaluated the "shared utensil use" behavior and reported a significant decline in the MI group after one month (p = 0.035); however, this declining trend was not statistically significant in the DHE group (p = 1.00).

Knowledge

Naidu et al. [15] and Henshaw et al. [25] examined the impact of MI on the caregivers' oral health-related knowledge. Naidu et al. [15] indicated that the knowledge of mothers undergoing MI significantly increased from baseline after 4 months in four items: "appropriate size of toothpaste for children", "the safest time to give snacks", "appropriate position for tooth brushing", and "appropriate number of visits for fluoride varnish" (p < 0.05). However, there was a significant increase in the control group only for the last two items (p < 0.001).

Discussion

A new field of research in dentistry is the application of brief interventions. There has been special interest in the application of MI owing to its efficacy in modifying behavior in domains such as addiction, diabetes management, and smoking cessation. [26] MI has been found to be efficient in altering specific behaviors in specific settings. However, it is essential to acknowledge behavior change as a science and find out the specific mechanisms involved in behavior change. [27] In contrast to other fields, caregivers undergo MI for ECC, but children are intended to benefit from it. Moreover, the complexity of the disease may make it challenging to understand the MI mechanisms. [25] Several behaviors on the part of the caregivers, such as cariogenic feeding practice, tooth cleaning frequency, and dental attendance, have been found to be associated with ECC. The studies included in this systematic review vary drastically in their assessed behaviors, number of participants, follow-up period, and MI protocol.

Dental attendance for fluoride varnish

MI showed a prominent impact on dental attendance for fluoride varnish use. While other behavioral outcomes were assessed by questionnaires, dental attendance was the only outcome evaluated through the patients' dental records. Weinstein et al. [24] and Harrison et al. [5] reported that families undergoing MI attended the fluoride varnish therapy much more routinely than the control families, indicating that MI mothers welcomed these fluoride varnish visits much more than the control mothers. These authors also reported a reduced clinical incidence and severity of childhood caries in the MI group, which may be due to greater use of fluoride varnish. Fidelity to the MI protocol was evaluated by reviewing the audiotapes in both studies, where the participants were low-income South Asian immigrants. A systematic review in 2013 showed that, independent of caries risk, application of fluoride varnish two to four times a year in either the permanent or primary dentition was linked to a significant decrease in the caries rate. [28] Reidy et al. [20] also evaluated dental attendance for preventive and restorative treatments in children following MI intervention. The participants were volunteer pregnant women who underwent pre- and post-natal MI or DHE. They reported that dental attendance did not increase significantly from baseline in either the MI or DHE groups. Since pregnant women have high motivation for active participation in the preventive care of their children, [29] the study of Reidy et al. [20] differed from the other studies in the mothers' baseline motivation and readiness for change.
Furthermore, choosing volunteers as participants led to a high risk of selection bias in this study.

Tooth cleaning frequency

Tooth cleaning frequency was studied as an outcome via various variables. In the studies of Ismail et al. [6] and Henshaw et al., [25] twice-daily tooth brushing was not improved by the MI intervention versus DHE following a two-year follow-up, and Ismail et al. showed no significant increase in children's tooth brushing at bedtime. Moreover, MI did not significantly reduce the clinical rate of ECC in these studies. Ismail et al. [6] argued that the broad nature of the targeted changes in their study might have limited the potential of MI to influence certain oral health behaviors. In addition, they indicated an improvement in the caregivers' oral health behavior of checking the child for "pre-cavities", which was linked to the researchers' greater focus on pre-cavities and their prevention than on other behaviors. In their study, Naidu et al. [15] reported children's weekly brushing as an outcome and showed that MI enhanced this behavior after four months. However, the number of participants was much lower than in the studies of Ismail et al. [6] and Henshaw et al. [25] In addition, this study did not evaluate fidelity to the MI protocol, while the two other studies used the motivational interviewing treatment integrity (MITI) code, the most frequently used tool for evaluating MI fidelity in RCTs. [30]

Cariogenic feeding practice

Another outcome evaluated in the studies was the caregivers' cariogenic feeding practice. Variables selected for this assessment included "the frequency of sweets used for reward or behavior modification", [14] "bottles given while awake [14] or at bedtime [2,14]", "sugar-sweetened beverage intake", [25] and "providing the child with non-sugared snacks and healthy meals". [6] Previous studies have revealed that ECC is higher in bottle-fed children. [31] Moreover, use of sugar-sweetened beverages elevated the dental caries rate among children and adolescents. [27] Two studies with a larger sample size and a longer follow-up period reported no significant change between the MI and DHE groups in cariogenic feeding behavior. [6,25] They also showed that MI intervention versus DHE had no significant effect on the clinical rate of ECC. Showing a high risk of reporting bias, Manchanda et al. [2] and Freudenthal et al. [14] did not report inter-group comparisons. In addition, these investigations did not assess fidelity to the MI intervention. Having conducted a study on volunteer participants, Freudenthal et al. [14] also had a high risk of selection bias. They showed that neither MI nor DHE changed cariogenic feeding behavior after one month, while Manchanda et al. [2] indicated that MI and DHE decreased the "bottle feeding at demand" and "night feeding through bottle" behaviors after an 8-month follow-up period. The results on the effect of MI on cariogenic feeding behaviors are inconclusive, since various variables have been assessed in this small number of studies. More well-designed studies are suggested to explore this subject.

Oral health-related knowledge

MI has been shown to improve participants' knowledge about specific subjects, including women's knowledge of vaginal birth [32] and patients' knowledge of stroke. [33] Regarding oral health-related knowledge, two studies [15,25] evaluated the impact of the MI approach on the caregivers' knowledge, and both indicated a prominent impact of MI. In a well-planned study, Henshaw et al.
[25] argued that although the participants' oral health-related knowledge improved significantly in the MI group versus the DHE group, it did not translate into significant group differences in the previously mentioned oral health-related behaviors. As for the proper frequency of fluoride varnish use, Naidu et al. [15] reported an improvement in the participants' knowledge after 4 months in both the MI and DHE groups. Regarding "the safe time for giving sugary drinks and snacks to children", knowledge improvement was significant only in the MI group. Follow-up telephone calls were used as boosters in the MI intervention in all studies. Harrison et al. [22] showed that further follow-up might decrease the chance of relapse in the MI-related behavior changes.

With respect to quality assessment, all studies were found to have random sampling based on the Cochrane collaboration tool for assessment of risk of bias. Regarding allocation concealment, the information in the majority of studies was inadequate, so they were rated as having an unclear risk of bias. [2,15,24,25] In all studies, blinding of personnel and participants was not possible owing to the nature of motivational counseling. The behavioral outcome was assessed with sufficient blinding of outcome evaluators in all studies, so they had a low risk of detection bias. The studies of Naidu et al. [15] and Ismail et al. [6] revealed a drop-out rate of >25% and did not provide clear information regarding the balance of the remaining samples in the control and case groups; hence, they were graded as having a high risk of attrition bias.

A paucity of high-quality studies with similar standardized methodology, small sample sizes, and short follow-up periods were some limitations of this review. Further, we were not able to produce a quantitative synthesis of the included articles because of the heterogeneity of the studies. The low number of publications and the variety of behaviors evaluated further complicated the interpretation of the available evidence. Considering these limitations, further studies are suggested to evaluate the effect of MI on larger sample sizes using standardized MI protocols for oral health evaluation with high fidelity to the MI spirit.

Conclusion

Although the MI approach showed a significant impact on "dental visits for fluoride varnish" and "participants' knowledge improvement", further well-designed trials are needed to evaluate the impact of MI on oral health-related behaviors using a standardized MI protocol and exploring more specific variables.

Why this paper is important:
• Behavior is important in dental disease etiology, so behavioral interventions are needed for prevention and treatment. Motivational interviewing (MI) has been proposed as a potentially useful behavioral intervention for prevention of early childhood caries. Studies have evaluated the effectiveness of MI targeting parents/caregivers for reduction of the risk-related behaviors of early childhood caries (ECC) compared to dental health education (DHE).
• It is important to review the outcomes to find out which behaviors can be improved by MI, as well as to reveal the limitations of these studies for consideration in future research.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Ocular blood flow evaluation by laser speckle flowgraphy in pediatric patients with anisometropia

Purpose: To determine the differences and reproducibility of blood flow among hyperopic anisometropic, fellow, and control eyes.

Methods: We retrospectively studied 38 eyes of 19 patients with hyperopic anisometropia (8.2 ± 3.0 years of age) and 13 eyes of eight control patients (6.8 ± 1.9 years). We measured the optic nerve head (ONH) and choroidal circulation using laser speckle flowgraphy (LSFG) and analyzed the choroidal mean blur rate (MBR-choroid), MBR-A (mean of all values in the ONH), MBR-V (vessel mean), MBR-T (tissue mean), the sample size (sample), which is thought to reflect the ONH area, and the area ratio of the blood stream (ARBS). We then assessed the coefficient of variation (COV) and intraclass correlation coefficient (ICC) and compared the differences among amblyopic, fellow, and control eyes in MBR, sample, and ARBS.

Results: In the ONH, the MBR-A, MBR-T, and ARBS of amblyopic eyes were significantly higher than those of fellow eyes (P < 0.01, P < 0.05, and P < 0.05, respectively) and control eyes (MBR-A and ARBS, P < 0.05 for both comparisons). The sample-T (size of the tissue component) in amblyopic eyes was significantly smaller than that in fellow and control eyes (P < 0.05). Blood flow in the choroid did not differ significantly between the eyes. The COVs of the MBR, sample, and ARBS were all ≤10%. All ICCs were ≥0.7. The COVs of the pulse waveform parameters fluctuation, blowout score (BOS), blowout time (BOT), and resistivity index (RI) in the ONH and choroid were ≤10%.

Conclusion: The MBR values of LSFG in children exhibited reproducibility; thus, this method can be used in clinical studies. The MBR values of the ONH in amblyopic eyes were significantly high, suggesting that measuring ONH blood flow using LSFG could detect anisometropic amblyopic eyes.

Introduction

The increasing number of patients with refractive error, a known risk factor for amblyopia, has attracted worldwide attention (1). If refractive error is not treated at the appropriate time in childhood, good vision will not be achieved, resulting in amblyopia. The prevalence of amblyopia is related to income level, age, ethnicity, public awareness, and screening programs; specifically, amblyopia has shown a higher prevalence in people with low income, those aged over 20 or under 10 years, and those located in Europe, Oceania, and North America (2). A recent study reported that the number of people with amblyopia will increase from 99 million in 2019 to 221 million in 2040 (2). Myopia is more common in Asia, while amblyopia is more common in Europe and North America. Although the distribution of refractive error varies across regions, the management of childhood refractive error is becoming increasingly important (3).

The prevalence of amblyopia is reported to be 0.74-4.3%, and the most frequent form is anisometropic amblyopia (3,4). Although anisometropic amblyopia occurs when differences in refractive values between the eyes cause a developmental disorder that leaves one eye amblyopic, it has been reported that there are also differences in ocular structure between the right and left eyes (5-7). In patients with anisometropic amblyopia, the amblyopic eye exhibits a shorter axial length, smaller optic nerve head (ONH) diameter, and thicker choroid (5,6).
In previous studies, pulsatile ocular blood flow (POBF) and color Doppler ultrasonography were used to evaluate retrobulbar blood flow in anisometropic amblyopic eyes, and it was reported that blood flow did not differ significantly between amblyopic and fellow eyes (8,9). Laser speckle flowgraphy (LSFG) is a non-invasive technique for measuring ocular blood flow (10-13), and the mean blur rate (MBR) is an indicator of ocular blood flow (14). Many investigators have used LSFG to measure ocular blood flow in patients with glaucoma (14,15), retinal vascular occlusion (16), or diabetic retinopathy (17). LSFG has also been used to study the relationship between ocular blood flow and systemic diseases such as sleep apnea syndrome and chronic kidney disease (18,19). A recent study also reported that MBR and age were significantly correlated, and that females have higher MBRs than males (20). However, to the best of our knowledge, there are no published studies on blood flow using LSFG in patients with anisometropic amblyopia other than case reports (21).

We hypothesized that the differences in ocular structure in patients with anisometropia also affect ocular hemodynamics. The purpose of the present study was to investigate the differences in ocular blood flow attributable to differences in ocular structure among amblyopic, contralateral, and control eyes, after assessing the reproducibility of the LSFG measurement values.

Patients

This was a retrospective, cross-sectional observational study, and all patients visited Toho University Omori Medical Center between April 2015 and July 2022. This study was approved by the Ethics Committee of Toho University Omori Medical Center (#M22161) and registered in the University Hospital Medical Information Network (UMIN) (Registry No. UMIN000049300). This study adhered to the tenets of the Declaration of Helsinki. This study was presented on our institutional website and the right to opt out was provided to all parents.

This retrospective study comprised 19 amblyopic eyes and their fellow eyes of 19 pediatric patients with hyperopic anisometropic amblyopia [12 males and seven females; 5-15 years of age; 8.2 ± 3.0 years (mean ± standard deviation, SD)] and 13 eyes of 8 pediatric control patients (five males and three females; 5-10 years of age; 6.8 ± 1.9 years). Hyperopic anisometropic amblyopia was defined as an interocular difference in the cycloplegic spherical equivalent (SE) of 2.00 diopters (D) between the amblyopic and fellow eyes. Moreover, the patients with anisometropic amblyopia had a best-corrected visual acuity (BCVA) of 20/20 or better due to treatment and did not have strabismus. Pediatric control patients, whose axial length was matched to that of the amblyopic eyes, were defined as those with a visual acuity of 20/20 or better who did not have strabismus, anisometropic amblyopia, a history of intraocular surgery, cataract, glaucoma, or retinal disorder. We excluded patients who were not cooperative enough for the LSFG examination.

LSFG examination

Although we previously used the LSFG-baby, a modified version of LSFG that enables measurements with the subject in a supine position, to measure blood flow at the ocular fundus in neonates (22,23), in this study LSFG was performed using the LSFG-NAVI™ (Nidek, Aichi, Japan). Before the examination, the patients' pupils were dilated with 0.4% tropicamide. The LSFG measurement method has been described in detail previously (10,24).
The measurements were conducted three consecutive times, and the ONH and choroid areas were analyzed. All measurements were performed by the same examiner (TI). The LSFG used the MBR as an indicator of blood flow, and the margin of the ONH was set manually.

Analysis of reproducibility

To determine intra-examiner reproducibility over the three consecutive measurements, we assessed the reproducibility of the MBR, number of samples, ARBS, and nine pulse waveform parameters by determining the coefficient of variation (COV) and intraclass correlation coefficient (ICC). A computational sketch of these statistics follows this section.

Statistical analysis

All statistical analyses were performed using JMP ver. 14 software (SAS Institute, Cary, NC, USA). Chi-square tests were used to compare sex distributions. The paired t-test was used to compare differences between the amblyopic and fellow eyes, and a non-paired t-test was used to compare differences between amblyopic or fellow eyes and control eyes. The correlation between axial length (AL) or SE and blood flow was analyzed by Pearson's correlation coefficient. All measurement values are expressed as the mean ± standard deviation (SD), and p < 0.05 was considered significant.

Results

In the control group, of the eight patients, three were measured in only one eye due to a lack of cooperation. Thus, the control group included 13 of a possible 16 eyes from the 8 pediatric control patients. Tables 1 and 2 present the demographic data and clinical parameters (data presented as mean ± standard deviation with 95% confidence intervals; SBP, systolic blood pressure; DBP, diastolic blood pressure; MABP, mean arterial blood pressure; bpm, beats per minute; IOP, intraocular pressure; OPP, ocular perfusion pressure; SE, spherical equivalent). The SEs of the amblyopic, fellow, and control eyes were 4.91 ± 1.49, 1.81 ± 1.36, and 3.30 ± 2.03 D, respectively. The difference in SE between amblyopic and fellow eyes was 3.11 ± 0.81 D. The SEs of the three groups differed significantly (amblyopic eye vs. fellow eye: P < 0.0001, paired t-test; control eye vs. amblyopic eye and fellow eye: P < 0.05 for both comparisons, non-paired t-test). The AL of the amblyopic, fellow, and control eyes was 21.41 ± 0.93, 22.46 ± 0.96, and 21.52 ± 0.60 mm, respectively. The AL of the fellow eye was significantly longer than that of the amblyopic and control eyes (P < 0.01 for both comparisons).

Blood flow

Table 3 shows the MBR, ARBS, and sample size results. The MBR-As of the amblyopic, fellow, and control eyes were 26.1 ± 3.6, 23.4 ± 3.5, and 22.6 ± 3.5, respectively. The MBR-A of the amblyopic eye was significantly higher than that of the fellow and control eyes (P = 0.0001 and P = 0.0108, respectively). The MBR-Ts of the amblyopic, fellow, and control eyes were 11.4 ± 2.1, 10.3 ± 1.7, and 10.9 ± 1.1, respectively. The MBR-T of the amblyopic eye was significantly higher than that of the fellow eye (P < 0.05). The MBR-V did not differ significantly. The ARBS values of the amblyopic, fellow, and control eyes were 40.9 ± 6.7, 36.7 ± 6.2, and 34.9 ± 5.9%, respectively. The ARBS of the amblyopic eye was significantly higher than that of the fellow and control eyes (P < 0.05), indicating that the ratio of the vessel component in the amblyopic eyes was higher than that in the fellow and control eyes. The MBR-choroid did not differ significantly among the three groups. MBR-A, MBR-T, and MBR-V were not significantly correlated with SE or AL.
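As a rough illustration of the reproducibility statistics described above, the sketch below computes the COV for three consecutive measurements of one eye and a one-way random-effects ICC across eyes. The three MBR readings are invented, and the simple ICC(1,1) form shown here may differ from the exact model used by the authors.

```python
import numpy as np

def cov_percent(measurements):
    """Coefficient of variation (%) of repeated measurements for one eye."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (eyes x repeats) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(cov_percent([26.0, 25.1, 27.2]))        # COV for one eye's three MBR readings
print(icc_oneway([[26.0, 25.1, 27.2],
                  [22.3, 23.0, 21.8],
                  [24.5, 24.9, 25.6]]))
```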
The sample numbers, which represent the size of the optic nerve and are reflected in Sample-A, were 37,413 ± 6,154, 39,198 ± 7,873, and 42,858 ± 8,265 for the amblyopic, fellow, and control eyes, respectively. Sample-A in the amblyopic eye was significantly smaller than that in the control eye. The Sample-Ts of the amblyopic, fellow, and control eyes were 22,211 ± 4,641, 24,438 ± 5,773, and 26,769 ± 7,655, respectively. The Sample-T of the amblyopic eye was significantly smaller than those of the fellow and control eyes (P < 0.05 for both comparisons). Sample-V did not differ significantly among the amblyopic, fellow, and control eyes.

Reproducibility

Table 4 provides the COVs and ICCs for the MBR, sample, ARBS, and pulse waveform parameters in the ONH. The COVs for the MBR, sample, and ARBS were all ≤10%, and the ICCs were all ≥0.7. Among the pulse waveform parameters, the COVs of fluctuation, BOS, BOT, and RI were ≤10%. The ICCs of all pulse waveform parameters were <0.7, except for fluctuation. The COV and ICC results for the MBR-choroid showed the same trend as the reproducibility in the ONH: the COV for the MBR-choroid was ≤10%, and the ICC was ≥0.7. Among the pulse waveform parameters, the COVs of BOS, BOT, falling rate, and RI were ≤10%. The ICCs of all pulse waveform parameters were <0.7, except for the FAI.

Discussion

The findings of the present study demonstrate that measurement of ocular blood flow in pediatric patients using LSFG was reproducible to the same degree as in adults. In the ONH and choroid, the COVs of all MBR values, sample sizes, ARBS, and even pulse waveform parameters such as the BOS, BOT, and RI were ≤10%. The ICCs of all MBRs, sample sizes, and ARBS values were ≥0.7. In the ONH, the MBR-A of the amblyopic eye was significantly higher than that of the fellow and control eyes. The MBR-T of the amblyopic eye was significantly higher than that of the fellow eye. Sample-A of the amblyopic eye was significantly smaller than that of the control eye, and Sample-T was also significantly smaller than those of the fellow and control eyes. The ARBS was significantly higher in the amblyopic eyes than in the fellow and control eyes. Thus, the amblyopic eyes showed higher blood flow and a smaller ONH size than the control eyes, even though the amblyopic and control eyes did not differ significantly in AL.

This is the first study to confirm the reliability of ocular blood flow measurements using LSFG in pediatric patients. In previous reproducibility studies in adult patients with glaucoma, the COVs ranged from 0.9 to 3.8% and the ICCs ranged from 0.95 to 0.98 (14). The COVs in patients measured in a supine position during surgery ranged from 3.1 to 6.9% (25); the COV for those in an upright position after being in a supine position was 6.7% (26); and the COVs in neonates ranged from 7.7 to 9.7% (23). In the present study, the reproducibility (COV) of the MBR in the ONH was 5.9%, which was very close to that observed in studies of adult patients; together with ICCs of ≥0.7, our results suggest that LSFG is sufficiently reliable for clinical use.

In this study, reproducibility in terms of both the COV and ICC was not favorable for the skew, rising rate, falling rate, FAI, or ATI. Large deviations in the COVs and ICCs were observed for the BOS, BOT, rising rate, and falling rate. According to Tsuda et al., pulse waves such as those in fluctuation, skew, and FAI in ocular blood flow are highly sensitive to subtle changes (27). In a study of neonates using the LSFG-baby, Matsumoto et al.
reported that reproducible COVs could not be achieved for pulse waves such as those in the fluctuation, skew, FAI, and RI; for pulse waves such as those in the BOS, BOT, rising rate, and falling rate, deviations in the COVs and ICCs similar to those obtained in the present study were observed (23). The likely reason for this may be that children have higher heart rates than adults, making them prone to subtle changes in sight lines and body movements at the time of measurement.

The ONH in the amblyopic eye was significantly smaller than that in the fellow and control eyes. Because the ONH sample size of the vessels was not significantly different, the difference in the size of the ONH was attributed to the difference in the size of the tissue. In fact, the size of the tissue in the amblyopic eye was significantly smaller than that in the fellow and control eyes, and the ARBS, representing the proportion of vessels, was significantly higher in the amblyopic eye than in the fellow and control eyes. Some researchers have reported that the size of the ONH in anisometropic eyes is significantly smaller than that in fellow or control eyes (28-30). The results of the present study are consistent with these findings. Lempert speculated that optic nerve hypoplasia leads to a decrease in ONH size in amblyopic eyes and associated retinal nerve fiber layer (RNFL) thinning, which impairs the anterior visual pathway and reduces visual function (5). However, Huynh and Wang reported that ONH size and RNFL thickness are associated in children, such that a small ONH tends to have a thinner RNFL (31). The thickness of the RNFL varies depending on the refractive error and axial length, and some reports have shown that there is no significant difference between the thickness of the RNFL in the amblyopic eye and the fellow eye, while others have reported that the amblyopic eye has a thicker RNFL (32-34). There are no reports of RNFL thinning in amblyopic eyes, so the fact that the size of the ONH in anisometropic amblyopic eyes is smaller than in fellow and normal eyes appears to be a structural feature of anisometropic amblyopic eyes rather than a sign of optic nerve hypoplasia.

In the current study, the amblyopic eye had a significantly higher MBR-A than the fellow and control eyes in the ONH. Some past studies that compared retrobulbar blood flow, that is, flow in the ophthalmic artery and central retinal artery, between amblyopic and fellow eyes reported no significant difference, indicating that the blood flow supplied to the anisometropic amblyopic eye and the fellow eye with different axial lengths is the same (8,9). Kobayashi et al. reported that, in normal eyes, blood flow in the ONH measured by LSFG did not differ significantly between the two eyes (35). Therefore, the reason for the higher MBR-A in the amblyopic eyes in the current study is that the size of the tissue component in the ONH of the amblyopic eyes was significantly smaller, while the size of the vascular component did not differ significantly among the eyes. As a result, the same amount of blood flow passed through a smaller tissue component, resulting in a higher blood flow velocity in the tissue component and a faster overall blood flow velocity in the ONH.
The vascular density of the ONH measured by optical coherence tomography angiography (OCTA) was significantly lower in amblyopic eyes than in fellow eyes, which differs from the present result that the size of the vascular component did not differ significantly between amblyopic eyes and fellow or normal eyes (36). This discrepancy arises because Sobral et al. enrolled patients with strabismic and anisometropic amblyopia, whereas we enrolled patients with anisometropic amblyopia without strabismus (36). Moreover, they included children who had been treated but had not reached 20/20, whereas we enrolled amblyopic eyes whose visual acuity had reached 20/20 or better with treatment. In addition, the difference in the analysis methods of OCTA and LSFG may also have contributed to this discrepancy, as OCTA analyzes the superficial vascular structure, whereas LSFG analyzes blood flow from the superficial layer down to the area around the lamina cribrosa (37,38). Although the MBR measured by LSFG was significantly correlated with the peripapillary relative intensity (PRI) and circumpapillary vessel density (spVD) measured by OCTA, OCTA has the advantage of visualizing the vascular structure in each layer, while LSFG (MBR and the nine pulse wave parameters) has the advantage of assessing physiological phenomena such as vascular resistance, autoregulation of the retinal microvascular circulation, and defocus (39-43).

There was no significant difference in choroidal blood flow in this study. Hashimoto et al. reported, in a case report, that although the MBR was decreased in the amblyopic eye before treatment, it increased as visual acuity improved after treatment, and the difference in MBR between the two eyes became smaller (21). Some researchers have reported that the choroidal thickness of the amblyopic eye is greater than that of normal eyes with the same axial length or of fellow eyes. In a study that focused on the structure of the choroid, that is, the lumen and stroma, the lumen was larger, the stroma was smaller, and the lumen/stroma ratio was larger in amblyopic eyes before treatment than in fellow and normal eyes; after treatment, however, the lumen and stroma became smaller and larger, respectively, and the lumen/stroma ratio was the same as that in fellow and normal eyes (44). Changes in choroidal structure that occur during treatment may affect choroidal blood flow. In this study, we analyzed choroidal blood flow in eyes with anisometropic amblyopia in which visual acuity had been improved by treatment, which may explain the absence of significant differences among amblyopic, fellow, and normal eyes.

The present study had some limitations. First, no comparison between fundus photographs and the ONH area measured by OCTA could be performed. Instead, we calculated the area based on the sample size of the ONH. The sample size did not take refraction and axial length into account, and further studies on sample size values are needed. Second, although choroidal blood flow may reflect choroidal structure, visual function, and the pathophysiology of anisometropic amblyopia, choroidal blood flow in this study was analyzed between the macula and the ONH, but not in the macula itself, because multiple locations could not be measured due to insufficient cooperation with the examination. In the ONH, the MBR was not significantly correlated with AL or SE; the structure of the ONH may have influenced this finding.
Moreover, although past studies have investigated correlations between AL or SE and blood flow in patients with myopia and hyperopia, they excluded amblyopia, whereas we enrolled patients with hyperopia and hyperopic anisometropic amblyopia. These differences may explain why no correlation was found here. Thus, the relationships among ONH and choroidal structure, visual function, and blood flow in the ONH and macula should be investigated in the future with a larger number of patients. Third, inter-examiner reproducibility could not be studied due to insufficient patient cooperation with the LSFG examination and should be examined in the future. Fourth, although we measured blood flow in pediatric patients in this study, blood flow should also be investigated in adult patients with anisometropic amblyopia to improve the reliability of these findings. Fifth, the number of subjects in this study was small. Because there were no reports evaluating blood flow using LSFG in anisometropic amblyopic eyes other than case reports, we calculated the sample size using a previous report investigating ONH size among amblyopic, fellow, and control eyes, in which the ONH sizes of the three groups were 2.57, 1.74, and 1.55 mm, respectively (29). Based on this previous study, at least 28 eyes in total were required for this study design (α = 0.05, power 80%; a sketch of this type of power calculation is given below), indicating that the 49 eyes enrolled in this study constitute a reasonable sample size; nevertheless, the sample size seems small for a study of blood flow. Because this study was retrospective, future studies should prospectively investigate blood flow in a sufficient number of patients with anisometropia.

Conclusion

In conclusion, we were able to measure ocular blood flow in pediatric patients, and our results suggest that reproducibility was good enough for clinical use. Moreover, the MBR values of the ONH in amblyopic eyes were high, and ocular structural differences were observed. These findings suggest that measuring ONH blood flow using LSFG could detect ocular structural changes in anisometropic amblyopic eyes.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Institutional Review Board of Toho University Omori Medical Center (#M22161). Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.

Author contributions

TI and TM: design of the study. TI and MK: collection, management, analysis, and interpretation of the data. TI, TM, SM, and YH: preparation and review of the manuscript. All authors contributed to the article and approved the submitted version.
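Returning to the sample-size calculation mentioned in the limitations above, the sketch below shows a standard two-sample power calculation for comparing group means (α = 0.05, power 80%) using the normal approximation. The two mean ONH sizes come from the cited report, but the assumed standard deviation is a placeholder, so this is an illustration of the method rather than a reproduction of the authors' exact computation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(mean1, mean2, sd, alpha=0.05, power=0.80):
    """Sample size per group for a two-sample comparison of means
    (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    effect = abs(mean1 - mean2) / sd
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect ** 2)

# ONH sizes from the cited report; sd = 1.0 is an assumed placeholder.
print(n_per_group(2.57, 1.55, sd=1.0))
```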
2023-03-01T16:20:54.507Z
2023-02-27T00:00:00.000
{ "year": 2023, "sha1": "8ece63ff4a8ce5077889c5168135c6132ef332e2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1093686/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7488731111538ef2485b96d88a42908364943f4f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237327942
pes2o/s2orc
v3-fos-license
Shannon Entropy Loss in Mixed-Radix Conversions This paper models a translation of base-2 pseudorandom number generator (PRNG) outputs to mixed-radix uses such as card shuffling. In particular, we explore a shuffler algorithm that relies on a sequence of uniformly distributed random inputs from a mixed-radix domain to implement a Fisher–Yates shuffle, deriving those inputs from a base-2 PRNG. Entropy is lost through this mixed-radix conversion, which is assumed to be a surjective mapping from a relatively large domain of size 2^J to a set of arbitrary size n. Previous research evaluated the Shannon entropy loss of a similar mapping process, but this previous bound ignored the mixed-radix component of the original formulation, focusing only on a fixed n value. In this paper, we calculate a more precise formula that takes into account a variable target domain radix, n, and further derive a tighter bound on the Shannon entropy loss of the surjective map, while demonstrating that the entropy loss decreases monotonically with increased size J of the source domain 2^J. Lastly, this formulation is used to specify the optimal parameters to simulate a card-shuffling algorithm with different test PRNGs, validating a concrete use case with quantifiable deviations from maximal entropy and making it suitable for low-power implementation in a casino. Introduction The residue number system (RNS), initially proposed in 1959, was derived from the third century Chinese remainder theorem [1]. RNS architectures are now applied in many growing fields such as cryptography [2], image-processing systems [3], and error-correction codes [4] due to their convenience in parallel computing. Parallel processing with RNS often involves replacing a typical base-2 system with a different number representation system built upon two or more coprime number bases, which we refer to as mixed-radix (MR) [5]. Research in MR calculations in RNS implementations has increased due to applications of circuits in which radix choice affects their speed, power, and area [6]. In some implementations, such as those using low-power devices that require some level of security, binary-number representation may result in poor implementation or may not be applicable at all, requiring the consideration of another radix. For example, this distinction is apparent in Reed-Muller expansions over Galois fields involving cryptographic circuits [7]. In this application, using a lower-order radix usually requires less computation, which decreases the circuit's area, but this technique leads to increased power consumption due to the large number of interconnections. On the other hand, choosing a higher radix decreases power consumption but increases the circuit area [8]. In the case that an optimal radix cannot be found, it is common to convert pseudorandomly generated words from one number base to another [9]. It is important to examine efficient ways to use MR techniques in processes such as Reed-Muller expansions. Researchers have thus worked on independent MR conversion algorithms and techniques that have short conversion times [6,10]. MR techniques were also used for fast Fourier transform (FFT) pruning, which was designed to improve computational efficiency [11,12]. These algorithms have conventional applications, such as recording the flicker of voltage in smart homes [13]. Similar to the general circuit application, the choice of radix is important, and some radices may be better suited to different scenarios depending on operation conditions.
For example, higher radices reduce latency for memory-shared FFT architectures [14]. This additional use of MR conversions has prompted additional research on MR [12,15,16]. Most digital logic and computing systems are base-2, but algorithms such as Cooley-Tukey [17] offer equivalent mixed-radix representations to achieve the same overall calculation, but as a parallel composition of many small RNS operations. In particular, most PRNG methods were designed to produce bistate binary values, and methods to quantify PRNG output randomness usually require their binary form [18]. Consequently, the most common output of a PRNG is a k-bit binary word that may be viewed as an element r ∈ Z_{2^k}. Examples include linear feedback shift registers (LFSR) [19], the Mersenne Twister [20], and thermal entropy-based true random-number generators (TRNG) [21]. As such, the primary source domain for RNG values is typically Z_{2^k}, while other algorithms consuming random words may wish for nonbinary random words. The specific example of optimally shuffling a deck of cards is discussed later on in this paper. To support this evaluation, we focus on quantifying a specific MR application: calculating the Shannon entropy loss of an onto map from a source domain of size 2^J to a target domain of arbitrary size n. Ref. [22] developed a lower bound on the Shannon entropy loss from mapping from an RNS-based source domain onto a Z_{2^k} target domain (a fixed n value), but this bound involves approximations that are only applicable for extraordinarily large source domains. In this paper, we calculate a more precise formulation that reveals the exact change in entropy of the onto map; our truncation of a Taylor series approximation offers a usable closed-form approximation that may be expanded by the reader. These calculations are referenced later to choose the correct parameter to reduce computations in our chosen application of card shuffling. On the basis of these calculations, we apply MR conversions in a casino shuffling application with a base-2 PRNG component. As the casino industry develops, casinos increasingly rely on robotic card dealers to reduce the cost of hiring human card dealers and to decrease the time that it takes to shuffle cards. Card-shuffling machines that randomize one deck while another is in use remove lost time and the possibility of fraudulent shuffles by a human dealer. These machines, which usually utilize a simple riffle shuffle, are rented for USD 500 per machine per month [23]. Additionally, a casino might benefit from the determinism of PRNG values to more precisely predict payouts; the same determinism becomes a potential avenue for dynamically skewing player odds [24], though the use of physics-based true RNGs can mitigate that risk. Many prior works explored testing the quality of a card shuffle. More generally, card randomization is a popular problem in many fields such as statistics, combinatorics, and communications [25]. Markov proved results that analyzed card shuffles as early as 1906 using finite Markov chains [26]. More rigorous methods were also introduced, such as those using Fourier analysis and quantifying the entropy loss of riffle shuffling [27]. More recently, statistical tests for randomness, such as the FSU DIEHARD suite [28], were used to quantify the randomness of a permutation [29]. Additionally, entropy formulations, such as fiber entropy, were utilized to measure shuffle output randomness [30].
In our application, we determined the quality of the shuffle on the basis of the Shannon entropy loss of the mapping process within the shuffle and by simulating the shuffle algorithm in MATLAB with different PRNGs. PRNGs and lightweight cryptographic primitives are useful in many applications. Though the option of using a TRNG in place of a lightweight primitive always provides a result with the highest entropy, utilizing PRNGs in these applications may be feasible and cost-efficient. In this paper, we demonstrate the feasibility of implementing a real-time, low-power card-shuffling algorithm with negligible entropy loss. The core algorithm is described in Section 2, followed by a complete derivation of Shannon entropy loss in Section 3. A simulation model is then presented in Section 4, followed by overall conclusions in Section 5. Materials and Methods To begin, we introduce a card-shuffling algorithm characterized by modulo arithmetic on RNG outputs and recurrent Fisher-Yates permutations [31]. The system, illustrated in Figure 1, shuffles a standard card deck as an ordered array C = [1|2|...|52] by repeated Fisher-Yates-based shuffles that utilize random numbers derived from any base-2 RNG. Let k denote the width of the RNG output. Each clock cycle, k bits are produced by the RNG and are sent to the data-framing stage, which involves concatenating the bits from α PRNG outputs to create a word X of length J = α · k. The product of this stage is a word X that has enough bits to perform future modulo operations in the algorithm. For example, an 8-bit RNG with J = 64 is framed by collecting eight successive outputs and concatenating them together. Figure 2 demonstrates how X is framed in this step. Once X is created, the J-bit string is mapped from a source domain of Z_{2^J} to a dynamic target domain of size n (starting at n = 52), effectively producing a near-ideal uniformly distributed random value on Z_n. These modulo residuals are used in a Fisher-Yates permutation on C for each of the 51 iterations of the modulo calculation. Once the last Fisher-Yates shuffle is completed, deck C is fully shuffled. One RNG may be better suited for this application than others. In this case, "better" refers to how closely the chosen RNG approaches maximal entropy, whether successive samples offer an opportunity for reverse-engineering, and the associated costs (monetary or computational). As such, a TRNG would not contribute to distortions in output randomness, while utilizing a low-complexity PRNG may result in a noticeably nonrandom shuffle. However, the consumer of this algorithm may prefer the cost effectiveness of a low-power PRNG over a TRNG. This paper attempts to answer the underlying question of how to optimize the needs of both randomness and cost effectiveness.
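To make the framing and mapping steps concrete, the following is a minimal Python sketch of the pipeline just described; the paper's own implementation is in MATLAB, so the function names here and the use of Python's random.getrandbits as a stand-in for the base-2 PRNG are illustrative assumptions rather than the authors' code.

```python
import random

def frame_word(rng_outputs, k):
    """Concatenate alpha successive k-bit RNG outputs into one J-bit word X."""
    x = 0
    for r in rng_outputs:            # each r is assumed to lie in [0, 2**k)
        x = (x << k) | r
    return x                         # J = len(rng_outputs) * k bits

def shuffle_deck(deck, k=8, alpha=8, randbits=random.getrandbits):
    """Fisher-Yates shuffle driven by J-bit words reduced modulo a shrinking n."""
    deck = list(deck)
    for i in range(len(deck) - 1, 0, -1):              # 51 iterations for 52 cards
        outputs = [randbits(k) for _ in range(alpha)]  # stand-in base-2 PRNG
        x = frame_word(outputs, k)                     # J = alpha * k = 64 bits here
        j = x % (i + 1)                                # onto map Z_{2^J} -> Z_n
        deck[i], deck[j] = deck[j], deck[i]
    return deck

print(shuffle_deck(range(1, 53)))                      # one shuffled 52-card deck
```

Note that x % (i + 1) is exactly the surjective map whose entropy loss is quantified in the next section, with n shrinking from 52 down to 2 as the shuffle proceeds.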
Mixed-Radix Conversion Entropy The shuffler's deviation from maximal entropy occurs both through the PRNG algorithms that produce random input numbers and through the onto-mapping process of the mixed-radix conversion. In this section, we provide a background on Shannon entropy, the metric we used to measure entropy loss, and we explain how to calculate the Shannon entropy loss of a mixed-radix conversion from Z_{2^k} → Z_n. The Shannon definition of entropy quantifies the memoryless predictability in an event. In particular, the Shannon entropy of a discrete alphabet of possible events of size L is defined by H = −∑_{i=1}^{L} p_i log p_i, (1) where p_i is the probability of the ith event. We utilize Equation (1) to measure the entropy of modulo reductions A (mod B), which can be represented by a surjective map A → B, where |A| = A and |B| = B. The mapping process can be visualized as placing the values 1, ..., A into bins that represent their residual modulo B, creating a histogram of values as illustrated in Figure 3. Splitting up the histogram on the basis of column height, we can calculate the Shannon entropy of the onto map A → B. From this splitting, with x = A mod B, we tailor the Shannon entropy formula in Equation (1) to obtain H_{A→B} = −[x · (⌈A/B⌉/A) · log_B(⌈A/B⌉/A) + (B − x) · (⌊A/B⌋/A) · log_B(⌊A/B⌋/A)], (2) where the logarithm is taken base B so that a perfectly uniform map attains entropy 1. Further, define the entropy loss for the onto map A → B as γ_{A→B} = 1 − H_{A→B}. (3) A small entropy loss of a radix conversion indicates that more randomness is retained; likewise, this entropy-loss metric must always be nonnegative and produce a value 0 ≤ γ_{A→B} ≤ 1. We see the application of Equation (3) later in this paper, where entropy is calculated for mappings in which A = 2^J and B ranges from 2 to 52. Calculating these values is achieved by applying an adaptation of Equation (2).
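As a concrete reading of Equations (1)-(3), the following Python sketch computes the entropy and entropy loss of the onto map A → B directly from the bin counts described above. The base-B logarithm is the normalization assumed here so that a perfectly uniform map has entropy exactly 1, consistent with the stated bound 0 ≤ γ_{A→B} ≤ 1; the paper's own code is not reproduced.

```python
import math

def map_entropy(A, B):
    """Shannon entropy (base-B logs) of the residues {0, ..., A-1} modulo B."""
    x = A % B                                   # number of "tall" bins
    tall, short = A // B + (1 if x else 0), A // B
    h = 0.0
    for count, bins in ((tall, x), (short, B - x)):
        if count and bins:
            p = count / A                       # probability of one such bin
            h -= bins * p * math.log(p, B)
    return h

def entropy_loss(A, B):
    """gamma_{A->B} = 1 - H_{A->B}, per Equation (3)."""
    return 1.0 - map_entropy(A, B)

print(entropy_loss(2**32, 52))                  # negligible: 2^32 >> 52
print(entropy_loss(2**6, 52))                   # much larger: 2^6 is not >> 52
```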
Entropy Sources Entropy is lost through the shuffler's onto-mapping process (discussed at the beginning of Section 2), and randomness is also lost through the chosen input RNG. Even though utilizing a TRNG in lieu of a PRNG or lightweight primitive provides the highest level of entropy, utilizing a PRNG in this application may be a feasible and cost-efficient choice. The concept of this experiment was, therefore, to build a card shuffler that is low-cost and low-power. We implemented the shuffling algorithm with different PRNGs and a TRNG to test their feasibility to create a random shuffle. These algorithms are listed and briefly described below. • LFSR: The linear feedback-shift register (LFSR) is a polynomial-based code on a binary space in which the output sequence is repeatedly replaced by an exclusive-or (XOR) of its last state. Since the LFSR produces linear operations on a finite set of states, it creates a cyclic pattern. An LFSR can produce a long cycle, or period, if the polynomial and input sequence are chosen correctly. Combined with its speed, this makes the LFSR a reasonable choice for applications that require pseudorandom number generation. We used two LFSRs, LFSR-16 and LFSR-24, corresponding to polynomials z^16 + z^15 + z^13 + z^4 + 1 and z^24 + z^23 + z^22 + z^17 + 1, respectively. • Combined Multiple Recursive PRNG: The combined multiple recursive PRNG combines multiple linear congruential generators, which are fast and low-cost PRNGs that use recurrence relations to obtain the next state. Linear congruential generator quality depends on the choice of parameters, but their speed and low cost have justified their use in cryptographic and statistical applications [32]. We used the MATLAB mrg32k3a variant, which has a period of 2^191 [33]. • Multiplicative Lagged Fibonacci: a type of lagged Fibonacci generator (LFG) that was created in the 1950s in hopes of improving the linear congruential generator. These PRNGs utilize a recurrence formula based on the Fibonacci sequence. A multiplicative LFG (MLFG) uses multiplication as the operation in question [34]. We used MATLAB's mlfg6331_64, whose period is 2^124 [33]. • Mersenne Twister: the default PRNG in simulation engines such as MATLAB and Python, the Mersenne Twister offers efficient speed and a large repetition period [20]. We used MATLAB's mt19937ar, which has a period of 2^19937 − 1, a Mersenne prime [35]. • Multiplicative congruential generator (MCG): Introduced in 1958, the MCG is a specialized version of the linear congruential generator in which a parameter is set to zero [36]. Its output quality passed randomness tests, but one must be careful in choosing some parameters [37]. We used mcg16807, which has a period of 2^31 − 2 [33]. • Modified subtract with borrow generator (MSBG): The MSBG was proposed in 1991 as an improvement to the lagged-Fibonacci PRNG [38], and was later applied in the RANLUX generator created for particle-physics simulations [39]. We used MATLAB version swb2712, which has a period length of 2^1492 [33]. After simulating the card shuffle using each of these RNGs, we analyzed the randomness of their outputs. We then discuss how RNG quality affects the resulting shuffle output. The goal is to show that, though a very low-power PRNG such as the LFSR may produce noticeable patterns within the output card shuffle, another lightweight primitive may be just as functional in producing a good shuffle as a high-cost TRNG. In order to explore these topics, we need to find an optimal testing value for the second parameter, J. This process is discussed in the following subsection. Calculating an Optimal J In this section, we find the value of J that should remain constant to test the feasibility and efficiency of the RNGs. The value of J should be large enough so that the algorithm's output is shuffled to acceptable standards, but also small enough to minimize the amount of arithmetic performed in the modulo calculation. To determine a metric of shuffle quality, an engineer may write code and actually prove feasibility on a small device, while a mathematician could show some guarantee about the probabilities on the basis of the shuffles that we see. We employed a hybrid of these two perspectives, where we considered both the numerical Shannon entropy loss from the shuffle and the hardware utilized to implement the shuffle. The following subsections explore both of these approaches. Trend of γ_{A→B} when A = 2^J We visualized the entropy loss of the shuffler's onto-mapping process to notice any pattern in the behavior of γ_{A→B} with the source and domain sizes that are relevant in this paper. Our general frame of reference remains with an assumption that 2^J ≫ n. We created a simulation that tabulates the entropy loss for the onto map A → B, where |A| = 2^J and |B| = n as n ranges from 2 to 52. The result was a surface plot histogram, shown in Figure 4. First, the n values that are a power of 2 (n ∈ {4, 8, 16, 32, 64}) have entropy-loss values of 0. This is because, for such n, 2 raised to any power J ≥ 6 modulo n equals 0; n automatically wraps around on an even basis, thereby creating zero residual. Visually, this plot appears to be monotonically decreasing with increasing J, and the implications of that monotonicity, if true, represent a significant simplification in system design (i.e., use the highest J value possible). Under the core assumption of 2^J ≫ n, we demonstrate this monotonicity, which is ultimately useful in our choice of J, since this statement justifies that choosing a larger value of J always results in lower entropy loss. The derived approximation quantifies the entropy losses γ_{2^J→n} and γ_{2^{J+1}→n} of a surjective map A → B in order to estimate the deviation from maximal entropy. The monotonicity conclusion falls apart if 2^J is not much greater than n, but it is utilized later in this paper to determine a suitable value of J in a casino-game implementation depending on processor size.
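A short, self-contained sketch can reproduce the tabulation just described and check both observations numerically: zero loss whenever n divides 2^J, and loss shrinking as J grows. The helper repeats the bin-count computation above; the test values below are illustrative choices, not the paper's figure data.

```python
import math

def gamma(J, n):
    """Entropy loss of the onto map 2^J -> n (base-n logs, per Equation (3))."""
    A = 2**J
    x, q = A % n, A // n
    h = 0.0
    for count, bins in ((q + 1, x), (q, n - x)):
        if count and bins:
            p = count / A
            h -= bins * p * math.log(p, n)
    return 1.0 - h

for n in (32, 51, 52):
    print(n, [gamma(J, n) for J in (8, 16, 32)])
# n = 32 (a power of two) gives zero loss up to float rounding; for n = 51
# and n = 52 the loss decreases as J grows, matching the monotonicity claim
# proved in the next subsection.
```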
The simplifications of H_J and H_{J+1} are also helpful in that their algebraic form confirms a maximal entropy of 1 for H_J. Approximation of Entropy and Entropy Loss for Z_{2^J} → Z_n Claim: Entropy loss as a function of J for our chosen ranges of (J, n) is a monotonically decreasing function when 2^J ≫ n. Additionally, the entropy, H_J, of a surjective map Z_{2^J} → Z_n under the condition of 2^J ≫ n is H_J ≈ 1 − x(n − x)/(2^{2J+1} ln n), (4) where x = 2^J (mod n). We also show H_{J+1} − H_J ≥ 0 for all n when 2^J ≫ n. Proof. Denote by H_J the entropy of the onto map A → B, where A = 2^J and B = n. Let x = 2^J mod n, where 2^J ≫ n. Using the residual count x, the ceiling and floor values that occur in the entropy value H are ⌈2^J/n⌉ = (2^J − x)/n + 1 (for x > 0) and ⌊2^J/n⌋ = (2^J − x)/n. For values involving 2^{J+1} in the proof, we tabulate the corresponding quantities in terms of x. Then, using a degree-2 Taylor series approximation for the logarithms, we obtain Equation (12). The higher-order terms in the infinite summations of Equation (12) should be recognized as an alternating series that rapidly decays given our prior assumption of 2^J ≫ n. Via the alternating series remainder theorem [40], both residual summations are bounded by their third positive terms. With the extraction of these terms into a residual R(J, n) that incorporates the leading scalar coefficients, we may restate Equation (12) accordingly. In stating the little-o() notation, we recognize that the scalar coefficients of each term are approximately 1/3 each; thus, any additive combination of them is less than 1; likewise, x ≤ n, giving us an overall bound on the residual term R(J, n). Further, given the magnitude of n in comparison to the magnitude of 2^J for our application, and the accompanying assumption that 2^J ≫ n, this residual term R(J, n) decays exceedingly quickly. If the assumption of 2^J ≫ n is invalid, either additional terms must be retained in the Taylor series expansion and/or the chosen approach is invalid for drawing a conclusion. In the extreme case where 2^J is comparable to n, the onto map contains very few domain elements per row (the ratio A/B described earlier); thus, the probabilities considered in the Shannon entropy estimation diverge from the uniform distribution. This entropy value simplifies to H_J = 1 − x(n − x)/(2^{2J+1} ln n) + R(J, n). (15) Equation (15) is a key simplification of H_J. This form allows us to simplify values in the subtraction H_J − H_{J+1}. It remains to prove that H_J is an increasing function of J (equivalently, that the entropy loss is decreasing), for which we temporarily ignore the algebra of the bounded residual term. We broke that evaluation into two independent cases on the basis of the piecewise nature of Equations (7)-(9) for the incremented case J + 1. Case 1 for H_{J+1}: x ≤ n/2. We used methods similar to those used in deriving Equation (15) to determine Equation (16). With further simplification and subtraction, we obtain a difference of two terms, (*) and (**). Given the prior assumption that 2^J ≫ n, (**) is arbitrarily small, with a practical bound on the order of n^3/(2^J)^3. This term is also on par with the previously calculated R(J, n) residual. Consequently, we work with (*) to evaluate whether the difference of entropies is greater than 0. If we define x = n/2 + ε, letting ε be the residual between n/2 and x, then 0 ≤ ε ≤ n/2. Substituting and simplifying yields Equation (19). Since ε ≤ n/2, (*) ≥ 0. Thus, H_J is an increasing function as J increases. Consequently, the entropy loss 1 − H_J is a decreasing function in this case. The second case follows likewise, but we show the formulas for further clarification. Case 2 for H_{J+1}: x > n/2.
The equivalent of Equation (16) for this case is derived in the same manner. We use simplification and subtraction similarly to Case 1 to reveal a difference of two terms, (***) and (****). As in the first case, (****) is substantially smaller than (***), on the order of n^3/(2^J)^3, which is comparable to the previously discussed R(J, n), so we worked with (***). We applied the substitution x = n/2 − ε and further simplifications as before; ε is still the residual between n/2 and x, 0 ≤ ε ≤ n/2. We arrive at the analog of Equation (19), namely Equation (22). Since ε ≤ n/2, (***) ≥ 0, and because the discarded residual terms are sufficiently smaller than either difference in Equation (19) or (22), we conclude the monotonicity under the initial assumption of 2^J ≫ n. We end this subsection by emphasizing the importance of Equation (4). We simplified the entropy approximation into a closed form that approaches the value of 1. This value is relevant in that it represents the maximal entropy of H_J, and thus 0 ≤ H_J ≤ 1. Applications and Relevance Our reduction in Shannon entropy is applicable both to the card-shuffling application in this paper and to measuring entropy loss in other nonbinary applications. Binary modifications, such as Gray code and the complex-binary-number system, are applied in puzzles [41], positioning technology [42], and genetic algorithms [43], and grant faster processing for problems that handle complex numbers [44,45]. There are also many physical processes that are not necessarily optimized with radix 2. Research was conducted to examine the advantages of adopting the octal number system to represent SI units, money, time, and calendar days for computer accessibility [46]. Other systems include the decimal-number system, which is often utilized when high precision is necessary [47], and the alphanumeric number system, which is commonly utilized in storing colors using the RGB color model [48]. Some processes require a nonbinary source of randomness. Some PRNGs are designed to utilize nonbinary operations, such as the nonbinary Galois linear feedback shift register (LFSR), in which the exclusive-or performs addition modulo-q instead of modulo-2 [49]. In fact, some processes require a mix of different radices, or mixed radix, as a source of randomness for PRNGs. This is common in instances in which there is interest in a uniform value on a specific domain size. In these cases, there is either the drawback of entropy loss (output that is not truly uniform) from relying on a small PRNG, or the disadvantage of extra processing when utilizing a larger PRNG. One such example is mapping from one RNS space to a second coprime RNS space. Employing mixed-radix and nonbinary-number systems is useful in depicting metrics. For example, representing time in terms of years, days, hours, minutes, and seconds requires a number system that takes into account the cycle length of each unit. Currency is another example, in which we denote money in terms of dollars, quarters, nickels, and dimes [50], yet commensurate analysis would fail in fanciful analysis using prime-based currency (galleons, sickles, knuts). Generating a random number of seconds within a minute would not provide uniformity if it began with a binary PRNG; this may be performed with a conscious understanding of entropy loss given a J-sized processor. Other applications of mixed-radix and nonbinary-number systems exist in games and combinatorial problems where the number base fluctuates throughout the processes.
Though it may be simple to analyze the entropy of these games for one radix value, it is a much harder problem to understand entropy loss when the radix constantly changes, which is common in games. For example, researchers utilize the combinatorial number system to analyze lottery games, in which strictly decreasing combinations of numbers are advantageous to winning [51]. Another example occurs in video games, in which mixed-radix indexing processes are used to denote the position of different players [52]. Mixed radix is also necessary for combinatorial problems and simulations. For example, the factorial number system is a mixed-radix system that is utilized to analyze combinatorial problems in which permutations are represented as numbers [53]. It is important in these scenarios to convert formulas to accept different radix values as input, so that others can understand the entropy loss of these games and problems. This paper highlights entropy loss in a card-shuffling system that can be applied to casino games: the index representing deck-position changes during every iteration of the system, so it is important to be able to calculate the entropy loss of the system with different radices. Lastly, nonbinary and mixed-radix number systems are utilized in many scenarios that encompass even simple applications, such as representing metrics such as time and length. Utilizing a nonbinary PRNG can help provide uniformity in generating random values for these applications. Utilizing RNS itself is also advantageous in parallel computing and fast arithmetic, which has applications in encryption, nanoelectronics, digital-image processing, and embedded computing. Our claim of Section 3.1 may be used to calculate the entropy loss of many of these mixed-radix and nonbinary applications, especially in games and combinatorial problems, because it is common for the number base to fluctuate throughout these processes. Equation (15), which quantifies the entropy loss of a surjective map A → B, where |A| = 2^J and |B| = n for an arbitrary value of J, is important in that it takes into account the radix n. Thus, Shannon entropy loss can be calculated for multiple radix values in applications that utilize this mapping process. Secondary Impact In this section, we discuss parameters that influence our choice of J in testing PRNGs for the casino application. First, we must consider the impact of keeping J constant as n varies, or of mixing J values such that we optimally select J, J(n), within an allowable range to minimize entropy loss for each n. By the approximation of Section 3.1, entropy loss is monotonically decreasing in terms of J. We want J to be as large as possible, and because of the monotonic nature of the entropy loss, there is no apparent benefit in mixing J values. Therefore, we base our choice of J on the maximum that the processor can handle, regardless of the current index value n. Next, we consider hardware in choosing which J value to use in the simulations. In other words, we configure the choice of J to use available resources. Due to currently available standard processor sizes, the choices for J are restricted to J ∈ {8, 16, 32, 64, 128}. This is justifiable due to the monotonicity of the entropy loss γ_{A→B} when A = 2^J. For example, if J = 30, increasing this value to J = 32 does not add any additional cost to the solution and would still result in lower entropy loss. The fundamental hardware consideration in choosing the optimal J value depends on the processor size.
The amount of arithmetic in the modulo process must not be excessive for the chosen m-bit processor. The primary computation that would cause a large amount of arithmetic in the shuffle is computing a large J-bit value modulo n, where n ≤ 52, on an m-bit processor when m < J. In the following paragraphs, we show that exceeding the processor constraints is not a realistic possibility for the shuffler, since many processors decompose large operations using bit-slice processes on a base-2^m number system [54,55]. In the bit-slice process, the J-bit word X is divided into J/m smaller m-bit subwords X_{J/m−1}, ..., X_1, X_0. In our adaptation of a bit-slice processor [56], these subwords are used to compute X (mod n) by Equation (23): X (mod n) = (∑_{i=0}^{J/m−1} X_i · (2^{m·i} mod n)) (mod n). (23) Each power 2^{m·i} modulo n can be precalculated and stored in memory to reduce the number of additional calculations. Figure 5 shows how an m-bit processor calculates Equation (23). Since the J-bit word X is broken down into m-bit fractional elements, a small processor can compute each subword mod n to create a 6-bit or smaller output for each subword. Thus, even if J = 128, which far exceeds entropy-loss expectations, a 32-bit processor suffices since it can decompose the word into four subwords. Important in this modular reduction is the recognition that the subwords require virtually no pairwise calculations, making them efficient on virtually any reduced-precision processor. Combining these secondary-impact types, we utilized values of J equal to or larger than the processor size m in our application for m ≥ 32. If a smaller processor size was utilized, we used J = 32 or higher. Table 1 displays the minimal value of J that is recommended for each common processor size.
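The following Python sketch illustrates the bit-slice reduction of Equation (23) as reconstructed above: the J-bit word is never reduced as a whole; instead, each m-bit subword is multiplied by a precomputed entry 2^{m·i} mod n and accumulated modulo n. The function name and the test word are illustrative, not from the paper.

```python
def bitslice_mod(X, J, m, n):
    """Compute X mod n on an m-bit datapath, avoiding full J-bit arithmetic."""
    words = J // m                                    # number of m-bit subwords
    table = [pow(2, m * i, n) for i in range(words)]  # precomputed 2^(m*i) mod n
    acc = 0
    for i in range(words):
        subword = (X >> (m * i)) & ((1 << m) - 1)     # extract X_i
        acc = (acc + (subword % n) * table[i]) % n    # small per-slice products
    return acc

X = 0xDEADBEEFCAFEBABE                                # arbitrary 64-bit test word
assert bitslice_mod(X, J=64, m=8, n=52) == X % 52     # matches direct reduction
```

Each partial product involves only operands below n (at most 6 bits for n ≤ 52), matching the reduced-precision behavior described above.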
Prototype Shuffling Algorithm We used MATLAB to simulate the shuffling algorithm using the PRNGs listed in Section 2.2. The code that we utilized is outlined in Algorithm 1. Inputs to this algorithm include k and α as described in Section 2. The final input is array, which is an ordered set that can be mapped from a deck of 52 cards. This deck does not have to be in the order of a standard deck, yet we assumed a fresh deck between iterations in our experiment. Block 1 creates a look-up table M of precalculated values 2^i mod n for i = 1 : α and n = 1 : 52. These calculations utilize the bit-slice process described in Figure 5. Block 2 then calculates the random numbers by scaling the PRNG value r with its corresponding value from M. Lastly, Block 3 implements the Fisher-Yates shuffle on array utilizing the random numbers from X. The output is a shuffled array that represents the final shuffled deck. After simulating the shuffling algorithm with different PRNGs, we used a poker-hand ranker to evaluate the output shuffle quality [57]. Although not an explicit measure of security or goodness, this practical application of the shuffling algorithm enables testing the results beyond esoteric entropy numbers. The numbers of poker hands expected and obtained in a 10-million-hand run, each hand consisting of an independent shuffle and the dealing of the first 5 cards, are displayed in Figure 6. The calculated expected frequencies show that, even when the algorithm relies on moderate PRNGs, the output values are still randomized; weak PRNGs such as the LFSRs display an observable deviation from combinatorics-based expected probabilities. A TRNG may be used in lieu of any of these methods, providing greater assurance of maximal entropy. Discussion Due to the popularity of mechanical card shufflers in casinos, there is interest in creating real-time shuffling implementations utilizing lightweight primitives such as PRNGs. We introduced a card-shuffling algorithm that, on the basis of RNG choice, can be low-power and cost-effective. We designed an experiment to test PRNGs of differing quality to determine how to optimize cost effectiveness and output shuffle quality. Entropy is lost from this algorithm in two ways: through the onto-mapping process in modulo operations and through the selected PRNG. As an extension to the bound created by [22] for entropy loss resulting from the onto-mapping process, we calculated a more precise formula that describes the exact entropy loss. In particular, we quantified the Shannon entropy of surjective mappings A → B where |A| = A and |B| = B. After computing a generic formula for arbitrary domain sizes, we refined the formula for A = 2^J. We utilized these formulas to prove the monotonicity of the onto map for A = 2^J and reasonable bounds on n. We created deterministic formulas and proved properties of the Shannon entropy of the onto map from 2^J → n. The motivation behind this proof was to choose a suitable testing value of J in order to test the functionality of different PRNGs in a casino shuffling algorithm. We established and proved the optimal parameters for testing to minimize entropy loss from the random-number-generation process in the algorithm. Then, we compared different PRNGs in the RNG process and examined the effect of using a low-power PRNG on the entropy of the combined operation by examining the frequencies of poker hands out of 10 million runs for each PRNG, listed in Figure 6. The resulting frequencies demonstrated that even low-power PRNGs are able to produce a suitable amount of randomness for a casino shuffling application, though we also showed that not all PRNGs make the cut. Future work includes building a hardware prototype of this shuffling engine, including the incorporation of a TRNG. Applying poker-hand analysis with a TRNG is expected to provide long-term probability values for comparison that can truly highlight the differences in utilizing a PRNG in lieu of a TRNG for this application. Moreover, the prototype is suitable for immediate incorporation in electronic games (e.g., video poker and slots), which may require different number bases and adaptation to mechanical shufflers.
2021-08-28T06:17:18.508Z
2021-07-27T00:00:00.000
{ "year": 2021, "sha1": "90d942a0000ef68795f48f1b005ec79196327599", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/23/8/967/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f305d59936784009263f1e5624055b3ecd0f1e6e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
36439910
pes2o/s2orc
v3-fos-license
Bilateral same-session ureterorenoscopy: A feasible approach to treat pan-urinary stone disease Objectives To assess the treatment effectiveness and safety of bilateral same-session ureterorenoscopy (BSSU) for the management of stone disease involving the entire urinary system. Patients and methods We reviewed the records of 64 patients who underwent BSSU for the treatment of bilateral ureteric and/or kidney stones. Size, number, location per side, and the total burden of stones were recorded. Data on stenting, lithotripsy, and stone retrieval, and details of hospital stay and operation times were investigated. Treatment results were assessed using intraoperative findings and postoperative imaging. The outcome was considered successful in patients who were completely stone-free or who had only residual fragments of ≤2 mm. Results The outcome was successful in 82.8% of the patients who received BSSU (54.7% stone-free and 28.1% insignificant residual fragments). The success rate per renal unit was 89.8%. There were no adverse events in 73.4% of the patients. The most common intraoperative complication was mucosal injury (36%). The complications were Clavien–Dindo Grade I in 9.4% and Grade II in 7.8%. Grade IIIa and IIIb (9.4%) complications required re-treatments. Statistical evaluation showed no association between complication grades and stone, patient, or operation features. Stone burden had no negative impact on BSSU results. The presence of impacted proximal ureteric stones was significantly related to unsuccessful outcomes. Conclusion BSSU is safe and effective for the management of bilateral urolithiasis. BSSU can prevent recurrent surgeries, reduce overall hospital stay, and achieve a stone-free status and complication rates that are comparable to those of unilateral or staged bilateral procedures. Introduction The treatment of bilateral urolithiasis has traditionally involved staged procedures due to concerns about the possible simultaneous traumatisation of both sides of the urinary system. Currently, in cases of bilateral ureteric stone impaction, semi-rigid ureteroscopy is often attempted bilaterally in a single stage [1][2][3]. Flexible ureterorenoscopy is usually carried out for ipsilateral nephrolithiasis whilst treating ureterolithiasis [4][5][6]. Reports on the efficiency of a single-stage, bilateral ureteroscopic treatment of stones in the entire urinary system are still scarce. Owing to the improvement of endoscopic technology and skills, the treatment of all stones in the entire urinary tract has become an attainable goal in a single operative session. Bilateral same-session ureterorenoscopy (BSSU) has been proposed to reduce overall operative times and anaesthetic requirements, which are factors associated with increased morbidity [1][2][3][5][6][7][8][9]. In our practice, patients recommended to undergo ureteroscopic stone treatment have been counselled on the option of undergoing BSSU for all stones of clinically significant size. This approach may be warranted urgently or as elective management of bilateral symptomatic or asymptomatic stones. As 32-58% of asymptomatic stones of significant size cause symptoms or require intervention within several years, we have aimed to clear all accessible stones in the urinary tract in a single operative session [10][11]. In the present study, we analysed our experience with BSSU used for the treatment of stone disease involving the whole urinary system.
We investigated the clinical operative data and perioperative course of this approach to determine its effectiveness and safety. Patients and methods From 2010 through 2016, 64 adult patients underwent BSSU. The indications for the procedure were bilateral nephrolithiasis, bilateral ureteric obstruction, and unilateral ureteric obstruction with ipsilateral/contralateral kidney stones. Patients with pan-urinary stones were deemed suitable for BSSU depending on clinical judgements of safety, indication, patient preference, and the failure of previous treatments. Pre- and postoperative evaluation Surgical planning involved imaging with unenhanced CT, ultrasonography, and/or a kidney-ureter-bladder radiograph (KUB). Stone size was measured as the greatest dimension in millimetres, and stone burden represented the sum of all the maximum sizes of stones at the given location. Patients were preoperatively tested and treated to ensure sterile urine. Informed consent of patients suitable for BSSU was taken after a comprehensive discussion of the procedure. Treatment success was intraoperatively assessed by endoscopy and postoperatively assessed by radiological imaging. KUB/ultrasonography was done at 4 weeks after a patient's operation. The outcome was considered successful in patients who were totally stone free or who had only residual fragments of ≤2 mm, too small for retrieval. Complications were assessed according to the modified Clavien-Dindo grading system. All abrasions, thermal injuries, and submucosal false passages were defined as mucosal injuries. Perforation encompassed injuries caused by misguided wires, laser fibres, or accessory instruments through the ureteric wall; the propulsion of stone fragments through the ureter/collecting system; or defects of any size on the ureteric wall created by dilatation or passage of a ureteroscope or access sheath. Intrarenal urothelial tears, incisions, and punctures were also described as perforations. The operation (OR) time denoted the whole interval of anaesthesia, which began with induction and consisted of the positioning and preparation periods and the duration of the endoscopic operation, and which ended with the extubation of the patient. Techniques and instruments BSSU was carried out under general anaesthesia, and all patients received i.v. cephalosporin or aminoglycoside for prophylaxis. Semi-rigid 8-9 F, Digital or Flex X2 7-F (Storz®) flexible ureteroscopes were used by three experienced endourologists. Several instruments (baskets, graspers, etc.) were used for stone extraction and/or positioning depending on the surgeon's choice. A ureteric access sheath (11/13 F or 12/14 F) was not a standard part of the BSSU technique; it was used at the surgeon's discretion depending on the burden and location of kidney stones, particularly if the stone burden was large or a prolonged procedure was anticipated. Intracorporeal lithotripsy was performed by holmium laser, pneumatic, and electrohydraulic fragmentation. The BSSU procedure was started from either the obstructed side or the side with the greater stone burden. Institutional Review Board permission to extract and review our prospectively maintained electronic database was obtained. Statistical analysis Statistical analyses were performed using PASS 2008 and NCSS 2007 Statistical Software® (Utah, USA). Continuous variables were compared using the nonparametric Mann-Whitney U-test.
Categorical variables were compared using Pearson's chi-squared test, the Fisher-Freeman-Halton test, Yates' continuity correction, and Fisher's exact test. Spearman's correlation analysis was conducted to measure the degree of association between variables. A P < 0.05 was considered statistically significant. Results In total, 64 adult patients (21 females and 43 males) aged between 27 and 80 years (median 47 years) with bilateral ureteric and/or kidney stones underwent BSSU. Stone characteristics The stone characteristics are presented in Table 1. In all, 46 patients (71.9%) harboured multiple stones (range 3-10) in their entire urinary system, whilst 28.1% had single stones on both sides. The average stone count was 4.25 per patient. There were both renal and ureteric stones in various anatomical locations in 35 patients (54.7%). In all, 10 patients (15.6%) had only renal stones and 19 patients (29.7%) had only ureteric stones bilaterally. The mean (range) stone burden was 29.87 (11-82) mm per patient. Operation data A flexible ureteroscope was exclusively or additionally required in 75% of the patients. Both semi-rigid and flexible instruments were used in about half of the patients (51.6%; Table 2). The median OR time was 107.5 min. Four patients with concomitant urological procedures (three photoselective laser vaporisation of the prostate and one fulguration of multiple bladder lesions, lasting 255, 160, 130, and 100 min, respectively) were excluded from the OR time analysis. One patient with proximal ureteric stenosis required a lengthy endoscopic procedure to access the stone, and this procedure lasted the longest (240 min) out of the cases included in the study group. The statistical analysis revealed a significant positive correlation between stone burden and OR time (P < 0.05; Table 3). Intracorporeal lithotripsy was not needed in 7.8% of the patients. A laser lithotripter was not amongst the operative armamentarium in seven (11%) patients. Stone retrieval equipment was not required in 20% of the patients. Most of the patients undergoing BSSU (93.8%) did not have preoperative ureteric stents (Table 2). Postoperative stents were placed in 86% of the patients. Of the patients without postoperative stents, four were treated for bilateral kidney stones. In the remaining patients without stents, the unstented units were treated for ureteric stones only. The duration of stenting was determined on a per-case basis, and ranged from 1 to 4 weeks. Complications The most common intraoperative complication was mucosal injury (36%). There were perforations in 9.4% of the patients (Table 2). Most of these perforations were minor and involved injuries of the ureteric smooth muscle and the urothelial wall inside the collecting system due to extraction and lithotripsy. Only one procedure was abandoned due to severe (full-thickness) ureteric perforation during stone extraction, which was managed with long-term ureteric stenting. There was prolonged macroscopic haematuria in 19% of the patients. Nearly a quarter of the patients reported severe pain after the procedure. Three patients (4.6%) had postoperative high-grade fever (>38.0°C). One of these patients had multiple renal stones with a total burden of 66 mm. There were impacted bilateral ureteric stones in the remaining two patients, one of which was further complicated by ureteric stenosis.
The OR times of these three patients were 150, 120, and 240 min, respectively. All three patients were successfully treated with broad-spectrum antibiotics. The length of hospital stay (LOS) was 1 day for 85.9% of the patients. The causes of extended LOSs were unalleviated pain in three patients, fever in three, macroscopic haematuria in one, and unspecified patient preference in two. Re-admissions (6.3%) were due to pain or fever in three patients and oliguria in one. This oliguric renal insufficiency resulted from bilateral urinary obstruction by a stone street comprised of gravel after an uncomplicated unstented BSSU for a 9-mm proximal ureteric stone and bilateral nephrolithiasis, with a total stone burden of 29 mm and 16 mm on each side. This patient's renal function quickly normalised after re-look ureteroscopy. There were no adverse events in 47 patients (73.4%). Complications, which were defined according to the modified Clavien-Dindo classification, were Grade I in six patients (9.4%) and Grade II in five (7.8%). Therefore, 90.6% of the patients had either no or minor (Grade ≤II) complications. In the remaining six patients (9.4%), the complications were Grade IIIa or IIIb. Three patients had undergone extracorporeal shockwave lithotripsy (SWL) treatment for residual or migrated stones (Grade IIIa). The Grade IIIb complications required re-operations for residual stones and obstructing fragments. A statistical evaluation did not show any association between the Clavien-Dindo complication grades and stone, patient, or operation features (P > 0.05; Table 4). Success There was a successful surgical outcome in 82.8% of the patients. After BSSU, 35 patients were completely stone free (54.7%), and 18 had only residual fragments of ≤2 mm (28.1%). When the surgical outcomes were re-evaluated per renal unit treated, the overall success rate was 89.8% (Table 5). Unsuccessful results comprised 11 patients with unreachable or residual stones of significant size. Two patients were treated during a period when a flexible ureteroscope was not available to pursue migrated fragments. Statistical analysis did not reveal any significant association between BSSU success rates and patient age, American Society of Anesthesiologists (ASA) score, OR time interval, or the use of laser lithotripsy. Although stone count and burden did not have any impact on treatment results, the presence of proximal ureteric stones was significantly related to unsuccessful outcomes (P < 0.05; Table 6). When the first and second halves of the study group (in chronological order) were further analysed, there were treatment failures in 25% of the patients (eight of 32) in the earlier half compared with 9.4% (three of 32) amongst the second half of cases. Discussion The guidelines on the management of stones of various sizes and locations are methodically updated parallel to the progress of endourological expertise and clinical evidence. However, there is currently no consensus on the best practice for the management of bilateral urinary stones. The review of our present results of the use of a single BSSU procedure for the treatment of bilateral pan-urinary stone disease revealed a successful outcome rate of 82.8%. Various options are available for the treatment of patients with bilateral renal stones, including bilateral SWL, staged or synchronous percutaneous nephrolithotomy (PCNL), and PCNL combined with ureterorenoscopy. Perry et al.
[12] stated that bilateral synchronous SWL is a safe and effective monotherapy for bilateral urolithiasis, with a bilateral stone-free rate (SFR) of 60% after one treatment. However, additional procedures were required in 16% of cases due to significant residual stone disease or obstruction during follow-up. Stone size and number independently increased the probability of treatment failure. For patients with large bilateral renal stones, synchronous bilateral PCNL may be offered. In a review by Williams and Hoenig [13], the overall outcomes for synchronous bilateral PCNL revealed high SFRs (95-97%), low complication rates (9-12%), short LOSs (4-6 days), and low blood transfusion rates. In a study comparing 150 simultaneous bilateral and 300 unilateral PCNLs, Holman et al. [14] concluded that similar complication rates (14.3% vs 11.3%, respectively) showed that the single-session bilateral PCNL is no more hazardous than separate PCNLs for bilateral kidney stones, alongside the clear advantages of single anaesthesia, less medication, shorter LOS and convalescence, considerable cost-effectiveness, and reduced loss of working days. Silverstein et al. [15] also commented on similar benefits, together with total blood loss and total OR time, making synchronous bilateral PCNL an attractive option for select patients with large renal stone burdens. However, in patients with multiple difficult to access renal stones and particularly patients with renal stones that are accompanied by ureteric stones, the effective clearance of stones may not be accomplished by synchronous bilateral PCNL or SWL. Considerable data have accumulated to advise BSSU as a treatment for bilaterally obstructing ureterolithiasis. A recent meta-analysis of 11 studies (431 patients), which assessed the treatment of ureteric calculi, revealed an overall SFR of 82% (varying from 52% to 90%) for BSSU [16]. The overall complication rate of BSSU remained at 17%. Amongst these, the incidences of pain, postoperative fever, and gross haematuria were 20%, 4% and 4%, respectively. Other complications including urosepsis, urinary infection, mucosal laceration, stone migration, and ureteric perforation accounted for 6% of the total complications. Contemporary studies of the use of BSSU to treat multiple stones at different locations in the urinary tract are limited, with existing studies reporting on a total of <250 patients [2][3][4][5][17][18][19][20][21]. The heterogeneity of stone characteristics makes comparing the results of published series difficult. In these studies, the SFRs have ranged widely between 52% and 92.8%, and these rates are inversely associated with a mean stone burden of >20 mm, a higher proportion of impacted proximal ureteric stones, and lower pole renal stones [2,3,5,9]. The varied outcomes of the current BSSU studies may be attributable to patient diversity and inconsistent methodologies. Our present success rate per case is augmented to 89.8% when reassessed on a per renal unit basis. Redefining the size of insignificant residual fragments and longer follow-up periods might further influence the SFR. In the present study, we did not detect any significant association between BSSU success and total stone burden in pan-urinary stone disease. We attained a favourable success rate despite a large number and volume of stones (Table 6) and a high proportion (70%) of nephrolithiasis in our patient population. 
Our present analysis also revealed that our BSSU success rate considerably increased over time, a result that is probably related to technical advancements and surgical experience. The prevalent use of JJ stents in our patients probably led to a higher incidence of postoperative prolonged haematuria and pain. Hollenbeck et al. [2] noted that patients were more likely to have postoperative complications when ureteric stents were not placed after BSSU. Due to concerns about simultaneous renal damage resulting from bilateral urinary obstruction, JJ stents should be used in all BSSU patients. Our experience showed that postoperative stenting is appropriate when bilateral renal stones are treated by laser dusting or fragmenting. In addition, the treatment of impacted stones, the use of access sheaths, major ureteric damage, and prolonged operations may obligate post-procedural stenting. The interval of stenting was determined arbitrarily in the absence of established guidelines. Presently, we keep pull-string JJ stents for a week in uncomplicated cases. A unilateral ureteroscopy intraoperative complication rate of 6.3% and a postoperative complication rate of 3.5% were reported by de la Rosette et al. [22] in a prospective study. Early studies associated the BSSU procedure with higher complication rates, but recent studies have mostly reported minor complications ranging from 17% to 50.8% [16][17][18][19][20][21][22][23]. Even though we observed low-grade complications, as defined using the Clavien-Dindo classification system, the specific rates of pain, perforation, haematuria, and mucosal injury remained relatively high. All the current BSSU literature pertains to retrospective data, which is naturally prone to bias and some uncertainty in terms of adverse events, including perforations. The characterisation and reporting of endourological complications still lacks standardisation, which hinders the interpretation of surgical performance [24,25]. By implementing the modified Clavien-Dindo system, we could observe that most of the recorded complications did not correspond to any deviation from the ideal postoperative course of BSSU. Apparently, mucosal injuries or minor perforations had no impact on the safety of BSSU. On the other hand, we regarded the necessity of secondary treatment of residual stones after BSSU as a failure to cure, rather than as auxiliary procedures contributing to success rates. The present study had limitations. The retrospective review of an uncommon surgical approach is subject to bias in patient selection. Our study group comprised patients who underwent BSSU for varied combinations of bilateral pan-urinary stones. The stone characteristics are broadly heterogeneous in terms of burden, location, and complexity. However, we believe that this heterogeneity of pan-urinary stone disease in our study group denotes the originality of our report. Reports of BSSU in the literature are usually composed of either ureter/ureter or kidney/kidney stones, but 55% of our patients had bilateral kidney and ureteric stones in various locations of the upper urinary system. The execution of BSSU may have also varied throughout the time interval of the study and with surgeon experience. The definition and reporting of success, complications, and follow-up were not standardised and may therefore be misleading. Most patients were not tested for renal function changes immediately after surgery.
In two early cases with bilateral ureteric stones, stone migration was the cause of residual fragments. As a flexible ureterorenoscope was not available at that time, these patients' procedures were unsuccessful, and re-operations (i.e. SWL) were necessary. In general, the contraindications for a BSSU procedure are no different than those for a unilateral ureteroscopy, which are untreated UTI, urosepsis, and uncorrected bleeding diathesis. Furthermore, the experience of the surgical team and availability of the appropriate instruments are of the utmost importance for a successful outcome. Conclusion BSSU is a challenging endourological procedure. However, through the constant improvement of endoscopic technology and with the right expertise and experience, this procedure can now be performed successfully and safely. With a success rate of 82.8%, our study has presented further evidence concerning the effectiveness of this contemporary single-session approach to bilateral, pan-urinary stone disease. Nevertheless, prospective randomised studies are urgently needed to determine the best practice for the use of BSSU in the management of complex urolithiasis.
Systemic Delivery of mLIGHT-Armed Myxoma Virus Is Therapeutic for Later-Stage Syngeneic Murine Lung Metastatic Osteosarcoma Simple Summary The lung is the second most common site of cancer metastasis and represents a major challenge for the clinical treatment of cancer; however, armed oncolytic viruses (OVs) systemically delivered by carrier leukocytes represent a new treatment strategy. To study PBMC delivery of oncolytic myxoma virus armed with murine LIGHT (vMyx-mLIGHT), we exploited a later-stage syngeneic murine lung metastatic osteosarcoma model. Our results show that PBMC-delivered vMyx-mLIGHT is an effective treatment for even later-stage disease in vivo and offers superior tumor cell cytotoxicity in vitro. Taken together, vMyx-mLIGHT/PBMC therapy offers great promise to treat lung metastatic cancers. Abstract Cancers that metastasize to the lungs represent a major challenge in both basic and clinical cancer research. Oncolytic viruses are newly emerging options, but successful delivery and the choice of appropriate therapeutic armings are two critical issues. Using an immunocompetent murine K7M2-luc lung metastases model, the efficacy of MYXV armed with murine LIGHT (TNFSF14/CD258) expressed under a virus-specific early/late promoter was tested in an advanced later-stage disease K7M2-luc model. Results in this model show that mLIGHT-armed MYXV, delivered systemically using ex vivo pre-loaded PBMCs as carrier cells, reduced tumor burden and increased median survival time. In vitro, when comparing direct infection of K7M2-luc cancer cells with free MYXV vs. PBMC-loaded virus, vMyx-mLIGHT/PBMCs also demonstrated greater cytotoxic capacity against the K7M2 cancer cell targets. In vivo, systemically delivered vMyx-mLIGHT/PBMCs increased viral reporter transgene expression levels both in the periphery and in lung tumors compared to unarmed MYXV, in a tumor- and transgene-dependent fashion. We conclude that vMyx-mLIGHT, especially when delivered using PBMC carrier cells, represents a new potential therapeutic strategy for solid cancers that metastasize to the lung. Introduction Lung metastatic tumors represent a major challenge for the clinical treatment of cancer. Lungs are the second most common site, after the liver, where tumors that start in other tissues tend to metastasize [1,2]. Common primary tumors that result in lung metastases include breast, colorectal, head and neck, urologic, melanoma, and bone [1,3]. Estimates across many studies of patients with tumors that form outside the lung have found that 20 to 50% will eventually have lung metastasis of their disease. Current treatments for lung metastatic tumors include radiation, chemotherapy, surgical resection, and immunotherapies, usually used in combination and with varying degrees of success. Five-year survival across lung metastatic tumors varies greatly based both on tumor origin and the current state of treatment of those tumors, but is on average below 50% [4,5]. Osteosarcoma (OS) is the most common primary malignancy of the bone [6,7]. OS occurs primarily in adolescence during times of rapid bone growth; however, there is a second peak of incidence in the elderly. Stage I and II non-metastatic OS tumors are historically treated through affected limb amputation and, more recently, limb salvage surgery and multi-agent chemotherapy, and have cure rates approaching 70% [8,9].
In contrast, patients diagnosed with lung metastatic osteosarcoma have less than a 30% rate of survival at 5 years after diagnosis. This is because of the limitations of current treatment methods, which rely on the use of high-dose combinations of chemotherapy and surgical resection of lung metastases. The limitations in the treatment of metastatic osteosarcoma are in part because of the aggressive nature of the tumor, which leads to its high resistance to many treatment modalities [10]. One way to address low 5-year survival rates is to find new modalities of treating these tumors. Previous studies using the K7M2 lung metastatic osteosarcoma model have shown that the immune checkpoint inhibitors (ICIs) anti-PD-L1 and anti-CTLA-4 are efficacious when treatment is initiated very early after tumor seeding [11,12]. However, these immunotherapies lose efficacy when used in later-stage disease [13]. One way to improve and complement immunotherapy is to combine ICIs with an oncolytic virus [14]. Oncolytic virotherapy, on the whole, is currently exploring questions around the genetic arming of the virus and how to effectively deliver the virus to metastatic sites of tumors that are not amenable to direct intratumoral injection [15,16]. One virus being studied both for optimal design, through transgene arming, and for systemic delivery to hard-to-reach cancers is oncolytic myxoma virus (MYXV) [13,17,18]. MYXV is a member of the poxvirus family Poxviridae and the genus Leporipoxvirus [19,20]. The natural evolutionary hosts of MYXV are New World lagomorphs, in which it causes only mild disease [21]. On the other hand, MYXV was famously shown to cause a highly pathogenic disease called myxomatosis in European rabbits and was used as a biocontrol agent against invasive feral European rabbit populations [22]. However, MYXV is unable to cause disease in any non-rabbit host [19,20,23]. In contrast to this strict rabbit-specific tropism in vivo, the majority of murine and human cancer cells have undergone genetic compromise(s) in their intracellular anti-viral defense pathways and instead act phenotypically like permissive rabbit cell lines in vitro and in vivo [23]. This safety profile for all mammalian hosts outside the rabbit, coupled with a natural tropism for cancer cells originating from diverse species, including human, has allowed the development of MYXV as an oncolytic therapy against human cancer. MYXV is oncolytic even without additional genetic alteration of the backbone virus to reduce tropism or pathogenicity; however, targeted genetic knockouts have been made in MYXV that further enhance virally induced cell death or alter viral immune modulation. Furthermore, MYXV, being a large and stable dsDNA virus, is an optimal candidate for engineering to express one or more therapeutic transgenes. To date, oncolytic MYXV has been tested in dozens of different murine, canine, and human cancer cell lines and many different xenograft and syngeneic murine models of cancer [24][25][26]. Recently, oncolytic virotherapy using MYXV armed with human Tumor Necrosis Factor (vMyx-hTNF) delivered systemically by leukocyte carrier cells was used alone and in combination with immunotherapy in the luciferase-tagged K7M2 lung metastatic osteosarcoma model [13]. This study showed that anti-PD-L1 and vMyx-hTNF each can act as successful monotherapies for lung metastatic osteosarcoma, but only if animals are treated relatively early after tumor seeding.
Furthermore, when anti-PD-L1 therapy is used in combination with vMyx-hTNF, the two modalities can act together to treat established tumors (as defined by threshold criteria of tumor cell luciferase expression levels of 5 × 10^5 luminescence units in the lung) at treatment start times under which both of the monotherapies were shown to be ineffective. These studies give powerful evidence that the combination of oncolytic viral therapy and ICIs is a potential new way forward in the treatment of lung metastatic tumors [13,27]. However, even this combination loses efficacy in later-stage disease, thus stimulating further exploration for more effective transgenes expressed by oncolytic MYXV constructs. In this current study, we report that MYXV armed with murine LIGHT (TNF Superfamily member 14: TNFSF14) is also a positive hit in the K7M2-luc lung metastatic osteosarcoma model but possesses even more potent anti-cancer activities against advanced later-stage disease than vMyx-hTNF. LIGHT was found to be a powerful activator of the innate and adaptive immune responses [28]. LIGHT was originally discovered in the context of herpesvirus infection and acts to stimulate anti-viral T-cell proliferation [29]. Recent studies have shown that LIGHT can also act as a potent anti-tumor agent, particularly when delivered locally into solid tumor beds [30]. This anti-tumoral potency of LIGHT is thought to be a combination of two mechanisms. First, as with its ability to combat herpesvirus infections, LIGHT was found to also activate the acquired cellular immune system to stimulate tumor-specific memory T cell responses [29,31]. LIGHT was also found to have direct pro-apoptotic effects on some tumor cells through the lymphotoxin-beta receptor and also by potentially releasing tumor neo-antigens [31,32]. vMyx-mLIGHT was designed to take advantage of these two properties, potentially turning immune-cold tumors hot and stimulating an improved cellular adaptive response to tumor antigens. To test vMyx-mLIGHT in this metastatic lung cancer model, we first assessed the ability of unarmed MYXV and vMyx-mLIGHT to replicate in K7M2 cells. We then assessed their abilities to increase survival and reduce tumor burden in an advanced later-stage disease setting of the metastatic lung K7M2-luc tumor model following systemic delivery. We next examined if LIGHT-armed MYXV decreased K7M2 cell viability in vitro when the virus is directly added to cells vs. when the virus is first pre-loaded onto PBMCs. Finally, we looked at differences in virus delivery to tumors, as assessed by in vivo transgene expression and infiltrating leukocytes, for vMyx-mLIGHT compared to unarmed MYXV when using PBMCs as carrier cells. Cell Culture, Autologous Carrier Leukocyte Collection, and Viruses Cell culture methodology was as described in Christie et al., 2021. K7M2-luc cells tagged with the firefly luciferase reporter were gifted by Dr. Helman from the National Institutes of Health [33]. K7M2-luc cells were maintained in DMEM/high glucose supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were maintained at 37 °C and 5% carbon dioxide. Murine PBMCs were harvested from healthy age-matched BALB/cJ mice via cardiac puncture and collected in 6.4% sodium citrate to prevent coagulation. PBMCs were isolated from whole blood using SepMate-50 tubes from Stemcell Technologies (Vancouver, BC, Canada) and Histopaque-1077 density gradient centrifugation at 1200× g for 10 min.
vMyx-GFP (wild-type MYXV expressing GFP under control of the poxvirus synthetic early/late promoter), vMyx-Fluc-tdTom (MYXV expressing firefly luciferase under control of the poxvirus synthetic early/late promoter, and tdTomato red controlled by the poxvirus late promoter P11) [17,34] and vMyx-hTNF (reporter GFP-expressing knockin of the human TNF gene inserted into the M131 locus of MYXV) constructs were used in this study [35][36][37]. vMyx-mLIGHT (MYXV expressing murine LIGHT (TNFSF14) under control of the poxvirus synthetic early/late promoter, firefly luciferase from the poxvirus synthetic early/late promoter, and tdTomato from the poxvirus late promoter P11, all inserted between M135 and M136 in the MYXV genome) used here was described previously but was referred to in that publication as vMyx-mLIGHT/Fluc/tdTr (viral constructs are shown in Figure 1A) [17,38]. (Figure 1 caption note: all assays were performed in triplicate; error bars represent mean ± SD; * p < 0.01, ** p < 0.001, *** p < 0.0001.) Viral Infection of PBMCs Virus infection of fresh autologous murine PBMCs was performed ex vivo at an MOI of 10 ffu per nucleated cell for 1 h at 37 °C, in a volume of 100 µL of Dulbecco's phosphate-buffered saline (DPBS), to allow for virus adsorption onto the cells. After 1 h of adsorption, for in vivo experiments, virus-loaded cells were resuspended in DPBS to their final volume and infused systemically into recipient mice (2 × 10^6 cells/mouse) via retro-orbital injection. For in vitro assays, infected PBMCs were added to K7M2 cells at the indicated ratios. Animal Studies Female BALB/cJ mice were purchased from Jackson Laboratory (Bar Harbor, ME, USA) at 5 weeks of age. Animals were acclimatized for at least 7 days prior to tumor implantation. The mice were housed in the Biodesign Institute vivarium under sterile conditions with free access to food and water for the duration of the acclimatization period and study. All housing, husbandry and experimental protocols were carried out in accordance with approved IACUC protocols and institutional standards. At day zero, BALB/cJ mice were inoculated intravenously via the lateral tail vein with 100 µL of DPBS containing 2 × 10^6 K7M2-luc tumor cells. Animals that showed signs of primary tumor implantation in the tail or died prior to two weeks after tumor inoculation were excluded from the studies. Animals were then monitored for tumor progression until the group lung tumor average reached 5 × 10^6 luminescence units, the point at which tumors become resistant to vMyx-hTNF + anti-PD-L1 combination therapy, which was defined as advanced later-stage disease. Animals were then assigned to treatment groups such that the average tumor burden across each group remained at 5 × 10^6 luminescence units. Oncolytic Myxoma Virus Treatments Tumor-bearing animals were systemically infused via the retro-orbital route with virus after the virus was pre-loaded ex vivo onto PBMCs for one hour as described above. Animals were first anesthetized using isoflurane and were then given 100 µL of 2 × 10^7/mL PBMCs infected with the respective virus, injected via the retro-orbital route. Animals were then monitored for 30 min for post-injection side effects. Animals were virus/PBMC-treated four times (multi-dose), every fourth day starting on the treatment start date, as previously described in Christie et al. [13]. Immune Checkpoint Inhibitor Treatments BALB/cJ animals were inoculated with K7M2-luc murine tumor cells and treated with anti-PD-1 as previously described in Lussier et al. [11][12][13].
At the treatment start date, tumor-bearing animals were treated with the immune checkpoint inhibitor anti-PD-1 (BioXcell CD279; Lebanon, PA, USA). Animals were treated with 10 mg/kg in a final volume of 100 µL via intraperitoneal (IP) injection. Animals were treated with anti-PD-1 four times, every third day, starting at the treatment start date. Viral Replication Assay To test if the transgenes might alter the replicative ability of MYXV in K7M2-luc cells, 1 × 10^5 cells were infected with vMyx-GFP, vMyx-hTNF, or vMyx-mLIGHT at two different MOIs (0.5 and 5, to assess multi-step and single-step virus growth, respectively) for 24, 48 or 72 h. At the given time points, cells were scraped off each well and collected together with the medium. Cells and medium were then subjected to three cycles of freeze-thaw and then sonicated for 1 min. Viruses were then titered by serial dilution on RK13 cells and fluorescent foci were quantified. Cell Viability Assay To assess if MYXV infection decreased the viability of K7M2-luc cells, 10,000 cells were seeded in each well of a 96-well plate. After one day, cells were infected at two different MOIs (1 and 10) with two different MYXV constructs (vMyx-GFP, vMyx-mLIGHT). To test if virus delivered via infected PBMCs further decreases the viability of the target K7M2-luc cells, PBMCs were infected with each of the virus constructs at an MOI of 10 and then co-cultured with target cells at a ratio of either 1:1 or 10:1 virus/PBMCs:K7M2-luc cells. At 24, 48 and 72 h, cell viability was assessed using an MTS assay. Ex Vivo Infection Assay of Tumor-Bearing Lung Samples Animals with advanced later-stage tumors had K7M2-luc tumor-bearing lungs excised and measured for total luminescence. Lungs were then divided in two such that each half exhibited comparable luminescence levels. Tumors were then infected ex vivo with 2 × 10^7 virus in 10% FBS DMEM and allowed to incubate at 37 °C and 5% carbon dioxide for 36 h. Tumors were read using the fluorescence imaging capability of a PerkinElmer IVIS Lumina III In Vivo Imaging System (PerkinElmer, Inc., Waltham, MA, USA) for levels of tdTomato red expression, indicative of virus late gene expression from virus-infected tumor cells in the lung. In Vivo Imaging of K7M2-Luc Tumors Tumor progression was assessed using a PerkinElmer IVIS Lumina III In Vivo Imaging System. Animals were IP injected with 100 µL of D-luciferin suspended in DPBS (30 mg/mL). Animals were then sedated using isoflurane and were imaged for 1 min using the IVIS Lumina III system. Following imaging, tumor luminescence levels were measured using Caliper Life Science Live Image v4.5 (PerkinElmer, Inc., Waltham, MA, USA). Tumor signals were measured for 95% radiance using the program's automatic drawing application and were usually first detectable in >95% of control mice by approximately 1-2 weeks after tumor implantation. In untreated tumor-inoculated mice, the acquisition of 5 × 10^6 radiance units from the lung was defined as advanced later-stage disease, which was used as the criterion for inclusion in the cohorts described in Figure 2. Generally, endpoint euthanasia criteria were met after 10^8 radiance units were detected in the lung. Euthanasia criteria were assessed based on a combination of the animal's breathing, energy level (lethargy), and ability to ambulate/neurological symptoms.
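To make the dosing and titering arithmetic in these methods concrete, here is a minimal sketch (not the authors' code; the function names and the example focus count are illustrative placeholders) of how a focus-forming titer is derived from a serial dilution and how an MOI-based inoculum volume follows from it:

```python
# Minimal sketch of the titer/MOI arithmetic implied by the Methods.
# Example numbers are illustrative, not study data.

def titer_ffu_per_ml(foci: int, dilution: float, plated_ml: float) -> float:
    """Stock titer = foci counted / (dilution factor x volume plated)."""
    return foci / (dilution * plated_ml)

def inoculum_ml(n_cells: float, moi: float, stock_titer: float) -> float:
    """Stock volume so that each cell receives `moi` ffu on average."""
    return n_cells * moi / stock_titer

# e.g., 42 foci after plating 0.1 mL of a 1e-6 dilution on RK13 cells:
stock = titer_ffu_per_ml(42, 1e-6, 0.1)        # 4.2e8 ffu/mL
# Loading 2e6 PBMCs at MOI 10, as described above:
print(f"{inoculum_ml(2e6, 10, stock) * 1000:.1f} uL of stock")  # ~47.6 uL
```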
Statistical Analysis Tumor growth was determined using radiance units, defined as photons/second/cm^2/steradian, within automatically determined Regions of Interest (ROIs) based on a threshold of a minimum of 5% of peak photon intensity. Statistical analysis for this study was carried out using GraphPad Prism (version 9, GraphPad, San Diego, CA, USA). Differences between Kaplan-Meier survival curves were determined using log-rank tests. Differences in tumor radiance and viral fluorescence determined by imaging were assessed using unpaired t-tests. MTS readouts were analyzed using two-way ANOVA and Tukey post-hoc analysis. Single-step and multi-step virus replication curves were performed. For single-step replication, cells were infected at a multiplicity of infection (MOI) of 5; for multi-step replication, an MOI of 0.5 was used. At an MOI of 5, vMyx-mLIGHT had a lower progeny viral yield compared to unarmed wild-type MYXV in K7M2-luc cells at both 24 and 48 HPI ( Figure 1B); however, MYXV and vMyx-mLIGHT were only significantly different at 24 HPI. vMyx-mLIGHT yielded significantly more progeny virus at 48 HPI than vMyx-hTNF, whereas vMyx-hTNF had a significantly higher titer at 24 HPI. Taken together with previously published results showing that unarmed MYXV is not an effective oncolytic virus in the lung metastatic osteosarcoma model, the higher replication of the unarmed virus in vitro at the very least is not associated with the better anti-cancer efficacy of TNF-armed MYXV in this model in vivo [13]. However, our evidence suggests that MYXV-expressed hTNF, but not mLIGHT, may somewhat reduce the permissiveness of cultured K7M2 cells to MYXV infection. Armed MYXV Constructs Exhibit Greater Oncolytic Activity In Vitro against K7M2-Luc Cells in a PBMC-Dependent Manner To understand if the transgene arming of MYXV was responsible for at least part of the increased efficacy of vMyx-mLIGHT compared to unarmed MYXV, cultured K7M2-luc cells were infected in vitro with each construct at two different MOIs, 1 and 10 ( Figure 1C), and cell viability (as assessed by mitochondrial function) was measured using an MTS assay. Results showed that the unarmed MYXV decreased the viability of K7M2-luc cells at both MOIs and induced significantly lower cell viability than either of the transgene-armed viruses at 72 h. vMyx-mLIGHT did not change the viability of infected cells at any point over the three time points at an MOI of 1. At an MOI of 10, there was a decrease in cell viability at 48 h; however, there was an increase between 48 and 72 h. We interpret these findings to mean that, at lower MOIs in particular, the transgene cytokines each exerted a stimulatory property on the cellular metabolism of the K7M2-luc target cells.
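For readers unfamiliar with MTS readouts, the percent-viability values discussed here are conventionally derived by normalizing blank-subtracted absorbance to the untreated control. A minimal sketch of this standard normalization follows (the authors' exact processing is not stated, and the absorbance values are made up):

```python
# Standard MTS normalization (assumed convention; example values are illustrative).
def percent_viability(a_treated: float, a_untreated: float, a_blank: float) -> float:
    """Viability relative to untreated control after blank subtraction."""
    return 100.0 * (a_treated - a_blank) / (a_untreated - a_blank)

# e.g., 490 nm absorbance: treated 0.62, untreated control 0.95, medium-only blank 0.08
print(f"{percent_viability(0.62, 0.95, 0.08):.1f}% viable")  # 62.1%
```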
To assess if PBMCs, which are used as carrier cells for the in vivo experiments testing advanced later-stage disease, could play a more direct role in cell killing, an in vitro co-culture experiment was performed using the MYXV constructs pre-loaded onto PBMCs to infect the target K7M2-luc cells ( Figure 1D). Autologous murine PBMCs were pre-infected for 1 h with each test virus, washed to remove free virus, mixed with untreated K7M2-luc target cells and then allowed to co-incubate for up to 72 h at two different ratios, 1:1 and 10:1 virus/PBMC:K7M2-luc cell. First, it was found that the mixture of K7M2-luc cells with uninfected control PBMCs alone did not change any cellular viability parameters after 72 h. Next, each virus treatment was compared for changes in cell viability in the presence or absence of PBMCs pre-loaded with the virus ( Figure 1D). Whereas unarmed MYXV at an MOI of 10 did not show any change from PBMC pre-loading in reducing K7M2-luc cell viability, there was a significant decrease at 72 h at an MOI of 10 compared to both the 1:1 and 10:1 PBMC-to-target K7M2-luc treatments. vMyx-mLIGHT/PBMCs induced the greatest K7M2 viability decrease by 72 h post-infection. Overall, we conclude that PBMC pre-loading of MYXV increased the in vitro cell-killing potential of each transgene-armed MYXV compared to infection with naked virus alone, in a fashion that correlates well with the increased efficacy of tumor regression observed for PBMC pre-loading of each virus over intravenous infusion of the naked virus in vivo. PBMC-Delivered LIGHT-Armed MYXV Shows Superior Anti-Tumor Activity Compared to vMyx-hTNF Therapy in Advanced Later-Stage Lung Disease In our previously published study, we showed that vMyx-hTNF/PBMC and the combination of vMyx-hTNF/PBMC + ICIs showed efficacy both in an early intervention model (i.e., treatment intervention beginning at 3 days post-tumor inoculation) and in an established disease model (i.e., treatment intervention beginning at a group average lung tumor luminescence of 5 × 10^5 units). To test if PBMC-delivered mLIGHT-armed MYXV conferred increased therapeutic efficacy in the K7M2-luc lung metastatic model, vMyx-mLIGHT/PBMC and the combination vMyx-mLIGHT/PBMC + anti-PD-1 were tested in a more advanced later-stage model of lung disease. In this model (hereafter referred to as the advanced disease model), animals were treated when the lung tumor burden of K7M2-luc was on average ten-fold higher than in the established disease model (i.e., disease tumor burden in the lung increased 10-fold, from 5 × 10^5 to 5 × 10^6 group average lung tumor luminescence units). To set up this later-stage model, animals were tumor inoculated at time 0 with 2 × 10^6 K7M2-luc tumor cells via lateral tail vein infusion. The bulk of these K7M2-luc cells seed and proliferate in the lung, but a minority of cells also seed into the liver and spleen, where they do not grow into macroscopic tumors. Animals were then monitored for disease progression using IVIS imaging until the cohort average reached 5 × 10^6 luminescence units per mouse lung. Animals were then randomly assigned to treatment groups such that each group maintained this average luminescence. Animals were then either systemically treated with anti-PD-1 alone, vMyx-mLIGHT/PBMC alone, vMyx-hTNF/PBMC + anti-PD-1, or vMyx-mLIGHT/PBMC + anti-PD-1, or left untreated. The viruses were pre-loaded onto autologous donor mouse PBMCs for 1 h ex vivo prior to intravenous infusion.
Animals were treated 4 times with ICI (every 3rd day), virus/PBMC treatment (every 4th day) or the combination of ICI + virus/PBMC treatment (Figure 2A). Animals were then monitored for tumor progression in real time through luciferase activity, as measured by whole-body luminescence using IVIS imaging. Animals treated with either vMyx-mLIGHT/PBMC or with vMyx-mLIGHT/PBMC + anti-PD-1 survived significantly longer than animals that were left untreated, treated with anti-PD-1 alone, or treated with the combination of vMyx-hTNF/PBMC + anti-PD-1 ( Figure 2B). Images of the luciferase-tagged tumors ( Figure 2C) show the progression of tumors that were either left untreated (left) or treated with the combination of vMyx-mLIGHT/PBMC + anti-PD-1 (right). The top row shows that animals had similar tumor luminescence levels at the treatment start date. Control tumor-bearing animals which were left untreated had lung tumors that all increased in luminescence between the treatment start date and 4 weeks post-treatment. However, of the seven animals that received vMyx-mLIGHT/PBMC + anti-PD-1, five animals showed tumor regression, including three which regressed to a point below the detection threshold, and one animal for which the tumor remained essentially static throughout the study period. One animal still remained completely tumor-free by day 140 at the end of this study. Finally, group tumor progression was plotted for 6 weeks after treatment started ( Figure 2D). We conclude that vMyx-mLIGHT/PBMC therapy is efficacious in reducing tumor burden in this later-stage model, and there is further enhancement of efficacy when combined with the ICI anti-PD-1. Delivery of PBMC-Loaded vMyx-mLIGHT Induces Enhanced Transgene Expression in Tumor-Bearing Animals In our previous study with vMyx-hTNF in the early-stage K7M2 disease model, we were able to show that PBMCs pre-loaded with unarmed vMyx-Fluc reporter virus in K7M2 tumor-bearing mice had sustained luminescence over 36 h that was not seen in the absence of PBMC carrier cells or in tumor-free control animals [13]. To test if vMyx-mLIGHT might change the expression level of the virus-encoded transgene, or whether it might alter the migration of carrier cells into the lung tumor bed, animals were inoculated with untagged K7M2 cells so that both virus-derived Fluc luminescence signals (which monitor viral gene expression in both the periphery and the tumor bed) and tdTomato red fluorescence signals (which monitor viral gene expression in tumor cells only) could be assessed in parallel. After animals were implanted with untagged K7M2 cells, they were monitored for symptoms indicative of advanced later-stage lung disease (i.e., 25 days post-tumor inoculation). When all animals showed symptoms of lung metastatic osteosarcoma (through changes in breathing, activity, and behavior), these advanced later-stage disease animals were treated with test virus that was either pre-bound onto PBMCs or systemically administered as unbound "free" virus. Animals were then monitored for up to 36 h for luciferase luminescence. Animals were then compared for total virus-derived luminescence signals controlled by an early/late promoter that would drive transgene signal both in peripheral leukocytes as well as from tumor cells ( Figure 3A,B).
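As an aside on the survival comparison reported above for Figure 2B: per the Statistical Analysis section, such comparisons were made with log-rank tests (the study used Prism). A hedged illustration of the same test in Python uses the `lifelines` package; the survival times and censoring flags below are invented placeholders, not the study data:

```python
from lifelines.statistics import logrank_test

# Hypothetical survival data (days) for two cohorts; event flag 0 = censored (alive at study end)
days_treated  = [60, 75, 90, 140, 140, 52, 88]   # e.g., a virus/PBMC + anti-PD-1 cohort
event_treated = [1, 1, 1, 0, 0, 1, 1]
days_control  = [28, 31, 35, 38, 40, 44, 47]     # e.g., untreated controls
event_control = [1, 1, 1, 1, 1, 1, 1]

result = logrank_test(days_treated, days_control,
                      event_observed_A=event_treated,
                      event_observed_B=event_control)
print(result.p_value)  # small p-value -> the survival curves differ significantly
```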
Images of animals at 3, 6, 24 and 36 h post-treatment show there is significantly more whole-body luminescence at 3 and 6 h irrespective of whether the recipients have lung tumors or no tumor and whether carrier cells were used for virus delivery ( Figure 3A). However, vMyx-mLIGHT/PBMCs in tumor-bearing animals produced significantly more luminescence at 3 and 6 h than the unarmed MYXV pre-loaded onto PBMCs in tumor-bearing animals. At later time points, specifically 36 h post-virus treatment, we observed that PBMCs pre-loaded with vMyx-mLIGHT induced higher luminescence levels in tumor-bearing animals than in animals without tumors, or in tumor-bearing animals that were treated with the intravenous systemic naked virus treatment ( Figure 3B). Trafficking of MYXV to Tumor Beds In Vivo Is Increased by Both PBMC Pre-Loading and Expression of the LIGHT Transgene Previously published data showed that autologous PBMCs as MYXV carrier cells increased the level of virally expressed tdTomato red transgene in K7M2 lung tumors 36 h post-treatment [13]. To determine if the expression of mLIGHT in the virus-infected carrier cells might alter the trafficking of the MYXV-loaded leukocytes, vMyx-mLIGHT was loaded onto PBMC carrier cells and compared to systemic administration of the same amount of unbound naked virus into mice bearing later-stage untagged K7M2 tumors.
Tumors were then excised 36 h after this treatment and were measured using IVIS for the late-expressed, virus-encoded tdTomato signal that is indicative of later-stage virus replication within the K7M2 tumor cells (Figure 4A-C). Lungs, liver and spleen from tumor-bearing animals were excised and imaged for tdTomato signal ( Figure 4A,B). All three of three lungs from animals treated with vMyx-mLIGHT/PBMCs were found to elaborate detectable tdTomato signal, whereas only one of three lungs from animals systemically treated with free unbound vMyx-mLIGHT had detectable tdTomato signal. No statistical difference was found between animals treated with LIGHT-armed vs. unarmed virus, either for LIGHT-armed virus delivered by PBMC carrier cells or after systemic delivery of free virus. However, it was found that all three animals in the vMyx-mLIGHT/PBMCs group expressed tdTomato signal, whereas only one of three animals treated with the free vMyx-mLIGHT had tdTomato signal, and two of three animals with the vMyx-Fluc/PBMCs had detectable tdTomato signal. In addition, it was found that the average tdTomato fluorescence signal was higher in the animals treated with vMyx-mLIGHT/PBMCs compared to both the free vMyx-mLIGHT treatment and the unarmed vMyx-Fluc/PBMCs treatment. Finally, to test if this difference in averages between armed vMyx-mLIGHT/PBMCs and unarmed vMyx-Fluc/PBMCs might be driven by differences in expression levels of the reporter tdTomato gene on a per virus-infected cell basis, in contrast to any difference in the actual delivery of virus load into the tumor bed, later-stage K7M2-luc tumors were excised from animals, divided equally on the basis of luciferase luminescence to standardize tumor load, and then infected ex vivo with 2 × 10^7 free virus (either vMyx-mLIGHT or vMyx-Fluc) for 36 h. At 36 h post-infection, tdTomato signal was measured and was found to be no different for the LIGHT-armed vs. unarmed MYXV, indicating that the increase in tdTomato signal observed in tumor tissue in vivo after systemic delivery of vMyx-mLIGHT/PBMCs was due to increased trafficking of carrier cells infected with vMyx-mLIGHT into tumor-bearing lungs ( Figure 4D,E). We also observed a trend towards increased levels of CD3+ lymphocytes and iNOS+ cells in the lungs of mice treated with vMyx-mLIGHT, but larger cohort studies would need to be conducted in order to evaluate the statistical significance of the observation. Discussion Treatment of lung metastases of solid cancers originating from other tissues represents a major challenge in increasing 5-year survival rates across many different metastatic tumor types [5]. Most treatments look to exploit the unique biology of a given tumor type, treating each type of metastasis as an island unto itself. Oncolytic viruses, such as MYXV, that exhibit a broad ability to infect different types of cancer cells irrespective of tissue type, offer a new way to approach therapy of these tumors after they have metastasized away from their site of origin [25]. The two biggest questions facing next-generation oncolytic virotherapy for metastatic cancer are optimal transgene arming(s) and a translatable systemic delivery strategy to hard-to-reach tissue sites of metastases such as the lung. The advantage of the K7M2 syngeneic mouse model of lung metastasis for studying MYXV oncolysis is that, unlike many preclinical cancer models we explored previously, unarmed MYXV is largely ineffective as a therapy against K7M2 lung tumors, and thus the model provides a potent screening strategy for evaluating our library of MYXV-encoded anti-cancer transgenes. Indeed, this screening with the K7M2 model revealed two distinct MYXV-expressed transgenes that provided tumor regression efficacy, namely human TNF and murine LIGHT (this study) [13]. In our recent study exploring MYXV treatment of metastatic K7M2 lung disease in immunocompetent mice, we showed that, whereas unarmed MYXV was ineffective as therapy, TNF-armed MYXV systemically delivered on PBMC carrier cells offered a major therapeutic benefit for both early-stage disease (i.e., treatment started 3 days after K7M2-luc tumor cell implantation) and for established lung disease (i.e., treated when lung disease was first detectable by IVIS and defined as 5 × 10^5 luminescence units derived from pre-seeded K7M2-luc cells) when combined with immune checkpoint inhibitor co-therapy [13]. However, this therapeutic benefit of TNF-armed MYXV/PBMC therapy, with or without ICI co-therapy, is lost in the later-stage advanced disease model (defined as treatment beginning only after a lung disease burden of 5 × 10^6 luminescence units is reached). The reason for the failure of vMyx-hTNF/PBMC in the advanced later-stage K7M2 lung disease model is likely complex, but the animals are considerably closer to their end-stage at this late time of treatment, and it is believed that oncolytic virotherapy depends upon downstream immune engagement with the virus-infected tumor beds after the virus has been delivered. Additionally, the failure of vMyx-hTNF against later-stage disease might also be explained by the contradictory effects TNF can have on many tumors [39].
MYXV armed with murine LIGHT was also shown to be highly effective in another aggressive cancer model, a murine pancreatic cancer model, as demonstrated by the ability of mLIGHT-armed MYXV to increase survival and reduce tumors [38]. This therapeutic benefit of vMyx-mLIGHT against pancreatic cancer was enhanced when the virus was delivered to pancreatic cancer tissue using mesenchymal stem cells (MSCs) as carrier cells. Given these results, and the insights gained studying vMyx-hTNF treatment for early-stage K7M2-luc-induced lung metastases, LIGHT has emerged as a promising therapeutic transgene for treating advanced cancer. In vitro virus replication experiments showed that vMyx-mLIGHT does not exhibit a significant suppression of viral replication compared to the unarmed construct ( Figure 1). Using an MTS assay that measures mitochondrial function as a surrogate for cell viability, we observed an intermediate level of reduced cell viability for K7M2 cells infected at low MOI with LIGHT-armed MYXV compared to the unarmed virus. Interestingly, vMyx-mLIGHT induced a further decrease in target cell viability when PBMCs were first pre-loaded with virus prior to co-culturing them with target K7M2 cells. This increased cell-killing effect of PBMC-loaded vMyx-mLIGHT, compared to virus alone or PBMCs alone, could be explained by the ability of virus-encoded LIGHT (which is expressed as a cell surface ligand from a constitutive early/late viral promoter in most primary leukocytes) to activate both CD8+ T cells and NK cells, both of which are powerful inducers of cytotoxicity against tumor cells [40]. These in vitro results were promising and were then extended to the in vivo K7M2 advanced later-stage lung disease model. To better understand differences in vivo between these two transgene-armed viruses, the advanced later-stage K7M2-luc lung disease model (defined by a 10-fold higher level of lung luminescence from K7M2-luc tumors prior to treatment start, and by being refractory to previously efficacious ICI plus TNF-armed MYXV treatments) was used. In this advanced later-stage model, lung tumor-bearing animals treated with either vMyx-mLIGHT/PBMC as a monotherapy, or with the combination of vMyx-mLIGHT/PBMC plus anti-PD-1 therapy, survived significantly longer than animals treated with either anti-PD-1 alone or the comparable vMyx-hTNF/PBMC plus anti-PD-1 combination therapy. This included one animal in the vMyx-mLIGHT/anti-PD-1 cohort remaining completely tumor-free 120+ days post-tumor inoculation, and other animals whose tumors did not completely regress but which saw significant reductions in tumor burden. While the goal will always be to completely cure all tumors in the entire treated cohort, significant reductions in tumor burden in the clinical setting can translate to longer symptom-free survival. It can also mean that cancer patients who were not eligible for other forms of treatment, such as surgical resection, ICIs or chemotherapy, may now become eligible [41,42]. We also looked at further issues around PBMC carrier cell delivery of MYXV to lung tumors that were first broached in our previous manuscript [13]. Firstly, we now show in vitro that PBMCs, when pre-infected with vMyx-mLIGHT, were able to reduce K7M2 target cell viability beyond the levels seen with either of the naked viruses alone or uninfected PBMCs.
Previously published studies have shown that primary leukocytes from mice and humans can permit MYXV binding, entry and early viral gene expression but are not able to support the late stages of MYXV replication [18,43,44]. This points to a role of donor PBMCs not just in virus delivery to the tumor bed in the lung, but potentially also in augmenting direct tumor cell killing. Secondly, constitutive transgene expression from virus-encoded reporter luciferase in vivo was assessed. It was found that this signal, which would emanate from both virus-infected circulating leukocytes in the periphery as well as from virus-infected K7M2 cancer cells, was durable out to 36 h specifically in tumor-bearing animals and when the virus was delivered by PBMCs, and was significantly higher in the animals treated with vMyx-mLIGHT at most time points out to 36 h. This increase in luciferase activity from both primary and cancerous cells infected with vMyx-mLIGHT is an indirect measure of transgene expression driven by the poxvirus synthetic early/late promoter and is believed to be a surrogate indicator of the expression of LIGHT, which is controlled by the same viral promoter as the luciferase reporter. Thirdly, we assessed if more virus signal could be detected in the tumor beds using the virally expressed tdTomato red fluorescence signal controlled by a late viral promoter that fires almost exclusively in virus-infected tumor cells. Of the lung tumors excised, all (3 of 3) tumors from animals treated with vMyx-mLIGHT/PBMCs were found to emanate detectable tdTomato signal, compared to a majority (2 of 3) with unarmed vMyx-Fluc/PBMCs and a minority (1 of 3) with systemic intravenous delivery of free vMyx-mLIGHT alone. Tumors from animals treated with vMyx-mLIGHT/PBMCs also had higher average fluorescence per tumor. Fourthly, to assess if the differences in average virus-derived luciferase (in all virus-infected cells) and tdTomato signal (in virus-infected tumor cells only) were driven either by increased transgene expression on a per-infected-cell basis or by delivery of a larger virus load to the tumors, excised tumor-bearing lungs were infected ex vivo with vMyx-mLIGHT or vMyx-Fluc and measured for expression levels of tdTomato at 36 h. It was observed that the tumors infected with either virus exhibited essentially equal signals when infected with the same amount of starting virus. This result means that the differences seen in tumors treated in vivo with PBMC-loaded armed vs. unarmed virus (where LIGHT-armed MYXV produced increased reporter tdTomato transgene expression) excised 36 h post-treatment are a function of increased delivery of vMyx-mLIGHT virus to lung tumors by the PBMC carrier cells. We conclude that even subtle changes in delivery can have profound effects on the efficacy of the therapy. Future studies of vMyx-mLIGHT in other syngeneic lung metastatic models are necessary to determine if the considerable therapeutic effects described in this manuscript are translatable to other types of lung metastases, or if this is limited to metastases initiated by osteosarcoma. Furthermore, testing whether vMyx-mLIGHT might exert cytotoxic effects on PBMC samples from patients with lung metastatic osteosarcoma should be carried out. Finally, testing vMyx-mLIGHT with other potentially synergistic therapies in this model can be performed to better understand how vMyx-mLIGHT could complement other current standards of care.
Conclusions In summation, vMyx-mLIGHT, when systemically delivered using autologous PBMCs pre-loaded ex vivo with the virus, can be used in combination with immune checkpoint immunotherapy as a successful therapeutic strategy in the K7M2 lung metastatic osteosarcoma model, and this combination therapy shows the ability to regress and even eliminate advanced metastatic lung tumors in an advanced later-stage disease model. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Association of SARS-CoV-2 Vaccinations with SARS-CoV-2 Infections, ICU Admissions and Deaths in Greece The available coronavirus disease 2019 (COVID-19) vaccines have shown their effectiveness in clinical trials. We aimed to assess the real-world effects of SARS-CoV-2 vaccinations in Greece. We combined national data on vaccinations, SARS-CoV-2 cases, COVID-19-related ICU admissions and COVID-19-related deaths. We observed 3,367,673 vaccinations (30.68% of the Greek population), 278,821 SARS-CoV-2 infections and 7401 COVID-19-related deaths. The vaccination rate significantly increased from week 2 to week 6 by 85.70%, and from week 7 to 25 by 15.65%. The weekly mean of SARS-CoV-2 cases, COVID-19 ICU patients and COVID-19 deaths markedly declined as vaccination coverage accumulated. The rate of SARS-CoV-2 cases increased significantly from week 2 to week 13 by 16.15%, while from weeks 14–25 the rate decreased significantly by 13.50%. The rate of COVID-19-related ICU admissions decreased significantly by 7.41% from week 2 to week 4, increased significantly by 17.22% from weeks 5–11, then decreased significantly from weeks 17–20, by 11.99%, and from weeks 21–25, by 16.77%. The rate of COVID-19-related deaths increased significantly from week 2 to week 15 by 12.08% and decreased significantly by 16.58% from weeks 16–25. The data from this nationwide observational study underline the beneficial impact of the national vaccination campaign in Greece, which may offer control of the SARS-CoV-2 pandemic. Introduction Coronavirus disease 2019 (COVID-19) has emerged as an ongoing pandemic that has resulted in more than 5 million deaths worldwide [1]. The case fatality ratio of COVID-19 (i.e., the number of deaths divided by the number of diagnoses) ranges by region from 19.48% to below 1% [2]. Mortality is increased in elderly patients, males and subjects with comorbidities such as diabetes, arterial hypertension and cardiovascular disease [3]. In response to the pandemic, several attempts have been made to control the dissemination of SARS-CoV-2 infection in order to reduce COVID-19-related morbidity and mortality. Recently, authorization of COVID-19 vaccines took place soon after the publication of the initial phase 3 trials [4], and to date more than 8 billion vaccine doses have been administered worldwide [2]. In randomized clinical trials, the rates of effectiveness against symptomatic COVID-19 were 95% for the BNT162b2 vaccine (Pfizer BioNTech), 70.4% for the ChAdOx1 nCoV-19 vaccine (AZD1222, AstraZeneca) and 94.1% for the mRNA-1273 SARS-CoV-2 vaccine (Moderna) [5][6][7]. The emergency development of COVID-19 vaccines, as well as beliefs that these vaccines are not effective, have led to negative attitudes and vaccine hesitancy worldwide [8,9]. Large epidemiological studies are therefore warranted to estimate the real-world effectiveness of COVID-19 vaccines and to overcome the aforementioned barriers to vaccine acceptance. The first case of COVID-19 in Greece was observed on the 26 February 2020 [10]. The vaccination campaign in Greece began on the 4 January 2021, during the second pandemic wave [11]. A third pandemic wave occurred in Greece during March 2021 and the total COVID-19 cases exceeded 200,000 [12]. By the end of May 2021, Greece recorded a total of 400,000 COVID-19 cases and entered the resolution phase of the third wave. The temporal impact of the vaccination campaign on the course of the pandemic in Greece has not been reported previously.
In the present study, we aim to provide effectiveness estimates for COVID-19 vaccinations in the first 25 weeks of the national vaccination campaign in Greece. To this end, we associated the rate of vaccinations against SARS-CoV-2 with the rate of new SARS-CoV-2 cases, COVID-19-related ICU admissions and COVID-19-related deaths. Study Population We used national surveillance data to address the association of SARS-CoV-2 vaccination with COVID-19 outcomes (i.e., new SARS-CoV-2 cases, COVID-19-related ICU admissions and COVID-19-related deaths). We analyzed data from the 1st to the 25th week of the vaccination campaign. Data concerning vaccinations were retrieved from the European Centre for Disease Prevention and Control (ECDC) [13]. Vaccination trend analysis included data from fully completed vaccinations. Data about COVID-19 infections and outcomes were retrieved from the National Public Health Organization (NPHO) of Greece [14]. The nationwide vaccination campaign in Greece began on the 4 January 2021. Vaccination was initially offered to health care workers (1st week) and to subjects older than 85 years (2nd week), while in the 3rd week of vaccinations, vaccine availability was extended to subjects in the age group 80-84 years. At week 6, vaccination was extended to citizens 60-64 years old and 75-79 years old. At week 13, vaccine availability was extended to ages 65-69 years; at week 16, to ages 50-59 years; and at week 17, to ages 30-49 years. The vaccination program started with the Pfizer/BioNTech vaccine, and the Oxford/AstraZeneca, Moderna and Johnson & Johnson vaccines were subsequently introduced. Vaccination was free of charge. The three outcomes assessing vaccination effectiveness were as follows: 1. SARS-CoV-2 cases were defined as laboratory-confirmed SARS-CoV-2 cases (symptomatic or asymptomatic); 2. Severe COVID-19 patients were defined as those admitted to the ICU; 3. Deaths attributed to COVID-19 were defined as deaths in patients with confirmed COVID-19. The National Public Health Organization of Greece (NPHO) collects the data from all the diagnostic laboratories and reports all the daily laboratory-confirmed cases of SARS-CoV-2 infection. All hospitals provide daily updates on the number of new cases, ICU admissions, deaths, etc., and these data are provided to the national database. ECDC data on vaccinations are reported for the following age groups: 18-24 years, 25-49 years, 50-59 years, 60-69 years, 70-79 years and ≥80 years, while the NPHO reports data for the age groups 0-17 years, 18-39 years, 40-64 years and ≥65 years. In order to perform subgroup analysis of the vaccination effect according to age, we grouped the ECDC data for subjects aged ≤69 years with the NPHO data for subjects aged <65 years, while ECDC data for subjects aged ≥70 years were grouped with NPHO data for subjects aged ≥65 years. Ethics approval was not applicable since we analyzed publicly available national surveillance data. No identifiable demographic or personal data were used in the present study. Statistical Analysis Data are presented as absolute numbers or as percentages. We used joinpoint regression modelling in order to assess the variation in the trends in vaccination rates, SARS-CoV-2 cases, COVID-19-related ICU admissions and COVID-19-related deaths.
The joinpoint regression model investigates the combinations of trends that result in a statistically significantly better fit to a data series than a single-trend line fitted using Poisson regression or time-series models [15]. With this procedure, one may determine the number of joinpoints that are sufficient for the estimation of significant alterations in incidence trends over time. The Joinpoint Regression Program (3.5.2) and SPSS 20 were used to analyze the data. A statistically significant joinpoint was set at p < 0.05. Results The first vaccination efforts in Greece started on the 27 December 2020 with sparse vaccinations, whilst the nationwide vaccination campaign began on the 4 January 2021 (week 1) with the vaccination of healthcare workers, and was extended on the 19 January to persons aged ≥85 years. The vaccination campaign started while Greece was under a nationwide lockdown, which had begun on the 7 November 2020. Phased relaxation of the restriction measures began on the 11 January 2021 (the 2nd week of the vaccination campaign) with the opening of school facilities. During the study period, there were 278,821 new SARS-CoV-2 infections and 7401 COVID-19-related deaths. At the end of the study period, 3,367,673 full vaccinations had occurred (19.49% in subjects aged <70 years), which amounts to 30.68% of the Greek population. Table 1 presents the weekly distribution of vaccinations, new SARS-CoV-2 cases, ICU admissions due to COVID-19 and COVID-19-related deaths, together with the ratio of COVID-19-related ICU admissions/SARS-CoV-2 cases and the ratio of COVID-19-related deaths/SARS-CoV-2 cases. The course of vaccinations by age group from week 1 to week 25 is presented in Figure 1. According to the results of the joinpoint analysis, for the age group of 18-24 years, there was a significant increase in the rate of vaccinations by 9.63% (Cis: 4.7-14.8) from week 2 to week 19, which was followed by a significant increase by 148.21% (Cis: 46.5-320.5) from week 20 to week 22. The rate decreased significantly by 31.96% (Cis: −51.6-−4.3) from week 23 to week 25 (Figure 1). For subjects 25-49 years, we observed a significant decrease in the rate of vaccinations by 9.91% (Cis: −17.5-−1.6) from week 2 to week 13, which was followed by a significant increase by 37.79% (Cis: 24.8-52.2) from week 14 to week 22, while during weeks 23-25 the rate of vaccinations did not significantly differ. For the age group 50-59 years, vaccination displayed a significantly decreasing trend from week 2 to week 12 by 17.29% (Cis: −30.5-−1.6). The rate significantly increased by 58.12% from week 13 to 22 (Cis: 43.3-74.4) and significantly decreased by 43.72% (Cis: −67.5-−2.6) from week 23 to 25. For the age group 60-69 years, we did not observe statistically significant changes in vaccination rates from week 2 to 13 and from week 19 to 25, while the rate significantly increased by 164.52% (Cis: 26.6-452.9) for week 14 to 18. For persons aged 70-79 years, we observed a significant increase of 30.08% (Cis: 9.6-54.4) for weeks 2 to 17 and a significant decrease of 20.57% (Cis: −33.1-−5.6) for weeks 18 to 25. Finally, for persons aged ≥80 years, there was a significantly decreasing trend in the vaccination rate from week 2 to week 25 by 9.3% (Cis: −13.0-−5.4). Weekly trends in vaccinations in persons aged ≥70 years in comparison with those <70 years are presented in Figure 2.
For subjects aged <70 years, we did not observe significant differences in the vaccination rate from week 2 to week 6, while for week 7 to week 15, we observed a significant increase in the vaccination rate by 7.03% (Cis: 3.8-10.4), which was followed by a significant increase of 33.07% (Cis: 29.7-36.6) for week 16 to 22 and a significant increase of 13.61% (Cis: 4.8-23.1) for week 23 to 25 (Figure 2). For subjects ≥70 years, vaccination rates were not statistically significantly different between week 2 and week 5. The vaccination rate significantly increased by 46.49% (Cis: 5-104.4) for week 6 to week 8, by 14.95% (Cis: 13-16.9) for week 9 to 18 and by 3.92% (Cis: 1.8-6) for week 19 to 25. As vaccination coverage accumulated nationwide, we observed that the weekly mean value of SARS-CoV-2 cases, COVID-19 ICU patients and COVID-19 deaths markedly declined (Figure 3). Figure 4 presents the results of joinpoint analysis for the trends in the rate of vaccinations, SARS-CoV-2 cases and COVID-19-related ICU admissions and deaths. In more detail, for all ages, the vaccination rate significantly increased from week 2 to week 6 by 85.70% (Cis: 28.1-169.1), and from week 7 to week 25 by 15.65% (Cis: 14.9-16.4). The rate of SARS-CoV-2 cases increased significantly from week 2 to week 13 by 16.15% (Cis: 11.1-21.5), while from week 14 to week 25 the rate decreased significantly by 13.50% (Cis: −17.7-−9.1). For the whole study duration, the rate of SARS-CoV-2 cases remained stable with a marginal difference of 0.9% (Cis: −2.2-4.4). For ICU admissions related to COVID-19, the rate decreased significantly by 7.41% (Cis: −13.5-−0.9) from week 2 to week 4, followed by a significant increase of 17.22% (Cis: 15.1-19.4) from week 5 to week 11. The rate of ICU admissions remained stable during week 12 to week 16 (Cis: −0.7-4.6).
The ICU admissions rate decreased significantly from week 17 to week 20 by 11.99% (CIs: −15.9 to −7.9) and from week 21 to week 25 by 16.77% (CIs: −20.2 to −13.2). For the whole study period, the rate of ICU admissions displayed a small reduction of 1.2% (CIs: −2.6-0.2), which did not reach statistical significance. Finally, the rate of COVID-19-related deaths increased significantly from week 2 to week 15 by 12.08% (CIs: 8. Figure 5 presents the average of vaccinations, weekly SARS-CoV-2 cases, ICU admissions and deaths for subjects <70 years. In this age group, the rate of SARS-CoV-2 cases increased statistically significantly by 16.18% from week 2 to week 13 (CIs: 10.8-21.8), followed by a statistically significant decrease of 12.59% (CIs: −16.9 to −8.1) from week 14 to week 25. ICU admissions decreased statistically significantly by 10.53% (CIs: −18.4 to −1.9) from week 2 to week 4, which was followed by a statistically significant increase of 20.39% (CIs: 16.2-24.7) from week 5 to week 10 and a statistically significant increase of 6.1% (CIs: 0.5-12.1) from week 11 to week 14. For weeks 15 to 18, we did not observe a significant change in SARS-CoV-2 ICU admissions, but there was a decrease of 16.09% between week 19 and the end of the study period. For COVID-19 deaths, we observed a statistically significant increase of 10.64% (CIs: 7.2-14.2) from week 2 to week 17, which was followed by a statistically significant decrease of 22.03% (CIs: −31 to −11.9) from week 18 to week 25.
Additionally, we analyzed COVID-19-related ICU admissions and death trends according to age and gender status. For COVID-19-related ICU admissions in the female population, for subjects 18-39 years old, we observed an increase of 143.9% (CIs: 25.5-374.0) from week 2 to week 4 and a further significant increase of 25.9% (CIs: 6.5-47.7) from week 8 to week 13. For weeks 14-25, we observed a significant decrease of 12.8% (CIs: −17.7 to −7.7). The rate remained stable for weeks 5-7. For females aged 40-64 years old, we observed a significant increase in COVID-19-related ICU admissions of 8.0% (CIs: 4.2-11.9) for weeks 2-16, which was followed by a significant decrease of 13.7% (CIs: −21.4 to −5.2) for weeks 17-25. For females older than 65 years old, we observed a significant increase in ICU admissions of 10.8% (CIs: 8.1-13.5) for weeks 2-15 and a significant decrease of 13.6% (CIs: −17.7-9.3) for weeks 16-25. For COVID-19-related ICU admissions in the male population aged 18-39 years, we observed a significant decrease of 15.7% (CIs: −26.5-3.2) for weeks 17-25. For this age group, for weeks 2-16, we recorded a non-significant increase of 4.1% in ICU admissions. For males aged 40-64 years, we observed a non-significant increase of 7.2% for weeks 2-16, which was followed by a significant decrease of 13.8% (CIs: −19 to −8.4) for weeks 17-25. Finally, for males older than 65 years, we observed a significant increase in ICU admissions of 20.2% (CIs: 8.1-33.6) for weeks 7-10, which was followed by stability in the rate of admissions until week 16. For weeks 17-25, we recorded a significant decrease of 13.5% (CIs: −15.8 to −11.2). For COVID-19-related deaths, in the female population aged 18-39 years, we did not observe significant differences during the study period. For female subjects 40-64 years, deaths were rather stable for weeks 2-5, while deaths increased by 21.0% (CIs: 8.9-34.5) from week 6 to week 15. For week 16 to week 25, we observed a significant decrease of 15.5% (CIs: −23.2 to −7.1). For females older than 65 years, we observed stability of the death rate from week 2 to week 5. From week 6 to week 14 we observed a significant increase of 17.2% (CIs: 10.3-4.6), which was followed by a significant decrease of 14.6% (CIs: −18.6 to −10.3) for weeks 15-25. In the male population, deaths for ages 18-39 years were not recorded during the study period. For ages 40-64 years, we observed a significant increase of 10.0% (CIs: 6.4-13.7) for weeks 2-16, which was followed by a decrease of 17.3% (CIs: −24.8 to −9.0) for weeks 17-25.
For males older than 65 years, we did not observe significant differences from week 2 to week 16, while for week 17 to week 25 we recorded a significant decrease in the death rate of 19.4% (CIs: −27.1 to −10.9).

Discussion

The data from this nationwide observational study underline the beneficial impact of the national vaccination campaign in Greece. As the vaccinations accumulated, we observed a significant decrease in SARS-CoV-2 cases, ICU admissions and deaths. The decreases were evident sooner in subjects ≥70 years, who were vaccinated earlier than younger subjects. The declines in COVID-19 cases and outcomes occurred despite the fact that Greece had been in a phase of reopening (following a national lockdown) since the 2nd week of the vaccination campaign. The COVID-19 vaccines have shown effectiveness against the disease in randomized clinical trials and are now widely used in national vaccination campaigns worldwide. The BNT162b2 vaccine (Pfizer BioNTech) has 95% efficacy against COVID-19 [5]. Vaccine effectiveness against symptomatic COVID-19 was estimated at 70.4% for the ChAdOx1 nCoV-19 vaccine (AZD1222, Astra Zeneca) [7] and 94.1% for the mRNA-1273 SARS-CoV-2 vaccine (Moderna) [6]. Although randomized clinical trials provide important information regarding vaccine effectiveness, their populations may differ from the general population, and thus it is essential to examine and report real-world effectiveness. Fukutami et al. [16] have used a public access COVID-19 database alongside a cases, vaccinations and COVID-19 (CaVaCo) tool to assess the efficiency of SARS-CoV-2 vaccination worldwide. The authors reported heterogeneity in the effects of vaccination across countries, with the majority of them exhibiting a positive correlation between COVID-19 vaccination, new SARS-CoV-2 cases and COVID-19 deaths. In Greece, we observed that as cumulative vaccination coverage increased, the weekly average incidence of SARS-CoV-2 cases and COVID-19 deaths decreased after the 16th week of vaccination, despite the fact that the country was undertaking a phased reopening (that started in the 2nd week of the vaccination campaign) following the nationwide lockdown. Our national population-level study adds important data on the effect of the vaccination campaign among the Greek population and provides insights into the real-life effects of vaccines in reducing the rates of SARS-CoV-2 cases and severe COVID-19 outcomes (ICU admissions and deaths). We observed marked declines in SARS-CoV-2 incidence (as suggested by new SARS-CoV-2 cases) and COVID-19 outcomes (i.e., ICU admissions and deaths) corresponding to increased vaccine uptake by the general population. Similar observations have been reported by Haas et al. in Israel [17], where COVID-19 vaccination proved effective in reducing symptomatic and asymptomatic SARS-CoV-2 infections, COVID-19-related severe and non-severe hospitalizations and COVID-19-related deaths. In a large community surveillance study in England, COVID-19 vaccination resulted in reductions in SARS-CoV-2 infections of 79% after the ChAdOx1 vaccine and of 80% after the BNT162b2 vaccine [18]. Similarly, in a retrospective study performed in an Italian province, the effectiveness of COVID-19 vaccination was estimated at 95% for the prevention of SARS-CoV-2 infections or COVID-19-related deaths [19]. In the same context, the real-world effectiveness of the vaccines against symptomatic disease or severe COVID-19 has been reported in elderly subjects [20].
Our results are in accordance with the aforementioned findings and provide further support for the impact of COVID-19 vaccination. In our study, the reductions in SARS-CoV-2 cases, ICU admissions and deaths occurred despite the fact that the country was in a phased reopening following the implementation of a nationwide lockdown of approximately 2 months' duration. Importantly, the COVID-19-related outcomes remained low even after the reopening had occurred, suggesting a positive impact of the COVID-19 vaccination campaign on public health. Herd immunity occurs when a large percentage of the population is immune, resulting in decreased spread of the disease from person to person and thus protection of the whole community rather than immune subjects only. Historically, herd immunity was thought to be reached when approximately 65-70% of the population had been immunized [21]. We observed a declining trend in new SARS-CoV-2 cases starting in the 14th week of the vaccination campaign, when approximately 6.1% of the population was vaccinated. One study has reported that for some countries the infection rate after the vaccination campaign has an inverted U-shaped trend, characterized by an increasing rate of infection after vaccination starts, which reaches a peak and then declines as vaccinations accumulate [21]. Our results are in accordance with the aforementioned study and suggest that in some countries, presumably those that are underpopulated, like Greece, partial herd immunity may be reached earlier, and that the nationwide vaccination campaign should be intensive so as to quickly reach the turning point and prevent SARS-CoV-2 resurgence. The emergence of SARS-CoV-2 variants with increased transmissibility may lead to higher herd immunity thresholds, and efforts should be made to increase vaccine uptake in order to reduce SARS-CoV-2 transmission and morbidity [17]. Vaccine hesitancy results in a delay or refusal of vaccination despite vaccine availability and, in the COVID-19 era, it has emerged as a growing global threat to public health [22]. Although some populations, such as health care workers, have shown high acceptance of COVID-19 vaccination, other groups are more hesitant [23,24]. Safety concerns, doubts about the efficacy of the available vaccines and misinformation about the virus are some of the reasons underlying COVID-19 vaccine hesitancy, which may result in slower vaccination rates [25]. A study conducted in the USA presented statistically significant differences in vaccine hesitancy based on sociodemographic characteristics, with the highest prevalence of COVID-19 vaccine hesitancy found among African Americans, Hispanics, those who had children at home, individuals with lower education and incomes, and rural dwellers [26]. Our study is not without limitations. We report reduced rates of SARS-CoV-2 cases, ICU admissions and deaths coinciding with the accumulation of vaccinations, but one cannot rule out the effect of potential confounders and the positive contribution of other factors. One may consider the analysis of national surveillance data a limitation. The analysis of publicly available data is a common research method that may help answer research questions concerning global (or, in our case, national) responses to the novel coronavirus.
The ecological design of our study cannot discriminate the impact of non-pharmaceutical interventions; however, it is notable that despite the reopening following the national lockdown, we observed a significant drop in SARS-CoV-2 cases, ICU admissions and deaths as the vaccinations accumulated. Unfortunately, we do not have data on the vaccination status of, or the vaccines administered to, the patients with COVID-19, which could provide direct data regarding the effectiveness of the vaccines. Additionally, we do not have available demographic characteristics (age and gender) of the COVID-19 patients or the vaccinated subjects. Therefore, we could not perform multivariate analysis to test for the possible effects of gender and age on COVID-19-related outcomes. We must also acknowledge that a significant limitation of the present study is the fact that the comparison of the trend of vaccinations with COVID-19 outcomes according to age relied on the grouping of age groups that were not identical, due to differences in the reporting of data between the NPHO and ECDC, which might have resulted in misclassification.

Conclusions

In conclusion, we observed a temporal association between vaccine uptake and reductions in the rate of SARS-CoV-2 cases, COVID-19-related ICU admissions and COVID-19-related deaths. Our data suggest that high vaccine uptake may represent an efficient route towards normality, offering control of the SARS-CoV-2 pandemic as well as reducing the morbidity and mortality of COVID-19.

Institutional Review Board Statement: Ethical review and approval were waived for this study because it reviewed publicly available national surveillance data.

Informed Consent Statement: Patient consent was waived because we reviewed publicly available national surveillance data. No identifiable demographic or personal data were used in the present study.

Data Availability Statement: Data are available upon request.

Conflicts of Interest: The authors declare no conflict of interest.
Ethyl pyruvate inhibits oxidation of LDL in vitro and attenuates oxLDL toxicity in EA.hy926 cells

Background

Ethyl pyruvate (EP) exerts anti-inflammatory and anti-oxidative properties. The aim of our study was to investigate whether EP is capable of inhibiting the oxidation of LDL, a crucial step in atherogenesis. Additionally, we examined whether EP attenuates the cytotoxic effects of highly oxidized LDL in the human vascular endothelial cell line EA.hy926.

Methods

Native LDL (nLDL) was oxidized using Cu2+ ions in the presence of increasing amounts of EP. The degree of LDL oxidation was quantified by measuring lipid hydroperoxide (LPO) and malondialdehyde (MDA) concentrations, relative electrophoretic mobilities (REMs), and oxidation-specific immune epitopes. The cytotoxicity of these oxLDLs on EA.hy926 cells was assessed by measuring cell viability and superoxide levels. Furthermore, the cytotoxicity of highly oxidized LDL on EA.hy926 cells under increasing concentrations of EP in the media was assessed, including measurements of high-energy phosphates (ATP).

Results

Oxidation of nLDL using Cu2+ ions was remarkably inhibited by EP in a concentration-dependent manner, reflected by decreased levels of LPO, MDA, REM and oxidation-specific epitopes, and by diminished cytotoxicity of the obtained oxLDLs in EA.hy926 cells. Furthermore, the cytotoxicity of highly oxidized LDL on EA.hy926 cells was remarkably attenuated by EP added to the media in a concentration-dependent manner, reflected by a decrease in superoxide and an increase in viability and ATP levels.

Conclusions

EP has potential as an anti-atherosclerotic drug, attenuating both the oxidation of LDL and the cytotoxic effect of (already formed) oxLDL in EA.hy926 cells. Chronic administration of EP might be beneficial to impede the development of atherosclerotic lesions.

Introduction

Oxidation of low-density lipoprotein (LDL) is a central element in the development of atherosclerosis [1]. LDL in its native state (nLDL) is not atherogenic. However, in the subendothelial space of arterial sites, nLDL can become subject to oxidation by mechanisms involving free radicals and/or lipoxygenases [2]. The resulting oxidized form of nLDL, oxLDL, contains, among other products, malondialdehyde (MDA) and 4-hydroxynonenal (HNE), which have been shown to exert prominent cytotoxic effects on endothelial cells, a prerequisite for the pathogenesis of atherosclerosis [3,4]. Presumably, drugs capable of suppressing the oxidation of LDL possess anti-atherosclerotic properties. Ethyl pyruvate (EP) is such a candidate [5]. Antioxidant action of EP has already been shown in vivo using animal models [6]. For example, Tawadrous et al. have shown that EP is capable of suppressing lipid peroxidation: treatment with EP attenuated hepatic MDA formation in rats subjected to oxidative stress [7]. It was the aim of our study to investigate whether EP is capable of suppressing the oxidation of LDL using a well-established in vitro model. Cu2+ ions were used to mediate LDL oxidation in the presence of increasing amounts of EP. The degree of oxidation of the lipid part of the LDL particle was assessed by measuring lipid hydroperoxide (LPO) as well as MDA concentrations. Oxidation of the lipid part of LDL has been shown to be followed by modification of apolipoprotein B (apoB), the protein part of LDL [2].
We, therefore, also assessed the degree of apoB modification by measuring relative electrophoretic mobilities (REMs) and by quantifying oxidation-specific immune epitopes using a fluorescent immunoassay and specific antibodies against oxLDL [8,9]. Furthermore, we assessed the cytotoxicity of oxLDL obtained by oxidation of nLDL in the presence of various amounts of EP. For this purpose, human vascular endothelial EA.hy926 cells were incubated with the respective oxLDLs and cellular viability was examined by means of a standard test (3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay, [10]). As a marker of oxidative stress, cellular superoxide levels were measured by high-performance liquid chromatography (HPLC) [11] using a method based on the oxidation of dihydroethidium (DHE) to 2-hydroxyethidium by superoxide (O2•−). Moreover, mitochondrial function was monitored by measuring intracellular high-energy phosphates using HPLC. Additionally, we investigated whether EP is capable of attenuating the cytotoxic effect of already oxidized LDL on endothelial cells. To test this hypothesis, EA.hy926 cells were incubated with highly oxidized LDL in the presence of increasing amounts of EP, and the respective viabilities, superoxide and ATP levels were measured.

Preparation of LDL

The study was approved by the appropriate institutional review board (ethics committee of the Medical University of Graz; 27-320 ex 14/15) and written informed consent was obtained. Human LDL (1.020 to 1.063 g/mL) was obtained from the plasma of normolipemic (Lp(a) < 5 mg/dL), fasting (12 to 14 h) male donors (a total of 7 healthy volunteers aged between 29 and 44 years) by potassium bromide sequential ultracentrifugation [12]. Pefabloc (50 μM, Sigma-Aldrich, Vienna, Austria), butylated hydroxytoluene (20 μM, Sigma-Aldrich) and EDTA (1 g/L, Merck, Darmstadt, Germany) were present during all steps of lipoprotein preparation to prevent lipid peroxidation and apoB cleavage by contaminating bacteria or proteinases. The samples were sterile-filtered and stored at 4°C in the dark until use. The protein content of LDL was measured using the Lowry method [13]. Total cholesterol of the isolated LDL was determined enzymatically with the CHOD-iodide test kit (Boehringer-Mannheim, Germany).

Determination of LPOs

The amount of LPO generated during LDL oxidation was determined with a spectrophotometric assay for lipid hydroperoxides in serum lipoproteins [14]. In principle, lipid peroxides are capable of converting iodide to iodine. Briefly, 50 μL of LDL solution (containing 1.5 mg/mL of total LDL) was mixed on a vortex mixer with 500 μL of a colour reagent taken from the commercially available CHOD-PAP test kit. The samples were allowed to stand for 30 min at ambient temperature. The absorbance was measured at 365 nm; 50 μL of PBS in colour reagent served as blank. The concentration was calculated using the molar absorptivity of I₃⁻ (ε = 2.46 × 10⁴ M⁻¹ cm⁻¹). Calibration curves obtained with different peroxides such as H₂O₂, t-butyl hydroperoxide and cumene hydroperoxide gave values for ε of 2.45 ± 0.04, 2.34 ± 0.26, and 1.26 ± 0.15 × 10⁴ M⁻¹ cm⁻¹, respectively. A stoichiometric relationship (slope = 1.02) was observed between the amount of organic peroxides assayed and the concentration of I₃⁻ produced.
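To make the arithmetic behind this assay explicit, the sketch below converts a blank-corrected absorbance at 365 nm into nmol LPO per mg LDL via the Beer-Lambert law; the 1 cm path length and the normalization to total LDL mass are assumptions chosen for illustration, not stated kit parameters.

# Minimal sketch of the iodide LPO assay arithmetic (assumptions: 1 cm path,
# 1:1 LPO-to-I3- stoichiometry as reported, 50 uL sample in 550 uL total mix).
EPSILON_I3 = 2.46e4      # molar absorptivity of I3- (M^-1 cm^-1)
PATH_CM = 1.0            # assumed cuvette path length
DILUTION = 550.0 / 50.0  # sample dilution in the reaction mixture

def lpo_nmol_per_mg(abs_365, abs_blank, ldl_mg_per_ml=1.5):
    """LPO content in nmol per mg LDL from a blank-corrected absorbance."""
    a = abs_365 - abs_blank
    conc_mol_per_l = a / (EPSILON_I3 * PATH_CM) * DILUTION  # in the sample
    return conc_mol_per_l * 1e9 / (ldl_mg_per_ml * 1e3)    # nmol/L over mg/L

print(f"{lpo_nmol_per_mg(0.25, 0.02):.1f} nmol LPO per mg LDL")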
Determination of REM

The electrophoretic runs were performed on agarose gel (1%) and the lipoproteins were precipitated on the gel with phosphotungstate-Mg2+ reagent. Electrophoresis was performed in 0.05 M barbital buffer at 100 V for 50 min. REM was defined as the ratio of the migration distance of oxLDL to that of nLDL.

Determination of oxidation-specific immune epitopes

The formation of oxidation-induced epitopes on apoB was recorded with monoclonal antibodies raised against modified apoB by means of a solid-phase dissociation-enhanced lanthanide fluorescence immunoassay (DELFIA) as described previously [8]. Anti-apoB (a rabbit polyclonal antibody purchased from Behring (Marburg, Germany)) and anti-ox-apoB (OB/04) (a monoclonal antibody raised against copper-oxidized LDL, characterized to react specifically with oxidized apoB-containing lipoproteins [8]) were used as detecting antibodies. After washing the plates three times, Eu3+-labeled rabbit anti-mouse IgG (for OB/04) or Eu3+-labeled sheep anti-rabbit IgG (for anti-apoB) was used as the reporting antibody by incubation for 1 h at 25°C. After washing and addition of the enhancement solution (Wallac Oy), fluorescence was measured.

Malondialdehyde analysis

MDA was determined according to a previously described HPLC method after derivatization with 2,4-dinitrophenylhydrazine (DNPH) [15]. For alkaline hydrolysis of protein-bound MDA, 25 μL of 6 mol/L sodium hydroxide was added to 0.125 mL of each fraction obtained at 2 h, 4 h, 6 h and 8 h of Cu2+-induced nLDL oxidation (in 1.5 mL tubes) and incubated at 60°C for 30 min in an Eppendorf heater. The hydrolyzed samples were deproteinized with 62.5 μL of 35% (v/v) perchloric acid, and after centrifugation (14,000 g; 2 min) 125 μL of the supernatant was mixed with 12.5 μL DNPH solution and incubated for 10 min. This reaction mixture, diluted derivatized standard solutions (0.625-10.000 nmol/mL) and reagent blanks were injected into the HPLC system (injection volume: 40 μL). The MDA standard was prepared as previously described [16]. The DNPH derivatives (hydrazones) were separated isocratically on a 5 μm ODS Hypersil column (150 × 4.6 mm) guarded by a 5 μm ODS Hypersil precolumn (10 × 4.6 mm; Uniguard holder; Thermo Electron Corporation, Cheshire, UK) with a mobile phase consisting of a 0.2% (v/v) acetic acid solution (bidistilled water) containing 50% acetonitrile (v/v). The HPLC separations were performed with an L-2200 autosampler, L-2130 HTA pump and L-2450 diode array detector (all: VWR Hitachi, Vienna, Austria). Detector signals (absorbance at 310 nm) were recorded and the EZchrom Elite software (VWR International) was used for data acquisition and analysis.

Cell culture

EA.hy926 cells were obtained from the American Type Culture Collection (ATCC) and were a kind gift of Dr. C.J.S. Edgell (University of North Carolina, Chapel Hill, NC, USA) [17]. EA.hy926 is a permanent cell line that was established by fusing primary human umbilical vein cells with a thioguanine-resistant clone of the human A549 cell line [17]. EA.hy926 cells display characteristic features of primary endothelial cells such as the expression of von Willebrand factor (Factor VIII-related antigen) and synthesis of Weibel-Palade bodies, the exhibition of angiogenesis [18], and involvement in coagulation, fibrinolysis [19] and inflammation [20]. For serum starvation (performed overnight) and during cell culture experiments, EA.hy926 cells were incubated in serum-free DMEM containing 1 g/L glucose, 3.97 mM L-glutamine and 1 mM sodium pyruvate, supplemented with 100 U/mL penicillin and 100 μg/mL streptomycin, at 37°C, 5% CO2.
nLDL and Cu2+-oxidized LDL (oxLDL) were prepared as 1.5 mg/mL stock solutions in PBS and were applied to EA.hy926 cells at a concentration of 0.3 mg/mL for the indicated time periods in serum-free culture medium. In some experiments, serum-starved cells were preincubated with ethyl pyruvate (EP) at concentrations of 250, 500 and 1000 μg/mL (final concentrations) for 45 min at 37°C before addition of oxLDL.

Electrical cell-substrate impedance sensing (ECIS)

To investigate the effects of nLDL and Cu2+-oxidized LDL on EA.hy926 barrier function and monolayer integrity, impedance monitoring was performed using an ECIS Z system (Applied BioPhysics, Troy, NY, USA). EA.hy926 cells were plated on gold microelectrodes of 8W10E+ arrays, grown to confluence, and then treated with the respective lipoproteins (in the absence or presence of EP) at concentrations of 0.3 mg/mL. After baseline stabilization (20 min after addition of the lipoproteins), impedance was recorded in real time at 4 kHz (barrier function) and 64 kHz (monolayer integrity) for 22 h.

Cell viability (MTT test)

Cellular viability was assayed using the MTT assay, which measures the metabolic activity of living cells. Cells plated in 12- or 24-well plates were grown to confluence and then treated with the respective lipoproteins (in the absence or presence of EP) at concentrations of 0.3 mg/mL for the indicated time periods. MTT (1.2 mM in serum-free medium) was added to the cells and incubated for 2 h at 37°C under standard conditions. Cells were washed with PBS and cell lysis was performed with isopropanol/1 M HCl (24:1; v/v) on a microplate shaker at 1200 rpm for 15 min. Absorbance was measured at 570 nm on a Power Wave X Select microplate spectrophotometer (BioTek Germany, Bad Friedrichshall, Germany) and corrected for background absorption (650 nm).

Measurement of superoxide by HPLC

Cells plated in 12- or 24-well plates were grown to confluence and then treated with the respective lipoproteins (in the absence or presence of EP) at concentrations of 0.3 mg/mL for the indicated time periods. After the incubation time, the medium was aspirated and the cells were washed with HBSS containing 100 μM diethylenetriaminepentaacetic acid (HBSS/DTPA). All subsequent steps concerning the addition of DHE and extraction of 2-hydroxyethidium (2-OH-E+) have in principle been described by Laurindo et al. [21]. In brief: DHE (50 μmol/L DHE in HBSS/DTPA, freshly prepared) was added to each well and the plates were incubated at 37°C in the dark for 30 min. DHE was aspirated and cells were washed with HBSS/DTPA. One mL of trypsin solution (1:10 dilution in PBS) was then added to the cells and the plates were incubated at 37°C for 5 min. After detaching, the cells were transferred into 1.5 mL Eppendorf tubes. All subsequent steps were performed on ice. The cell suspension was centrifuged at 1500 rpm at 4°C for 5 min. The supernatant was discarded and 200 μL of acetonitrile was added to the cell pellets for lysis, with additional treatment by vortexing and ultrasonication in a cooled ultrasonic bath for approx. 10 min. After centrifugation at 12,000 rpm at 4°C for 10 min, the supernatant was dried under vacuum. The pellets after acetonitrile extraction were dissolved in 200 μL of NaOH (0.1 mol/L) for determination of protein concentration using the BCA Protein Assay (Pierce). Dried supernatants (stored at −20°C for up to 2 weeks) were dissolved in PBS/DTPA prior to HPLC analysis.
HPLC analysis was performed in principle according to a method described by Zielonka et al. [11]. In brief, separation of DHE-derived products was performed on a Kromasil column (5 μm, 250 mm × 4.6 mm I.D., equipped with a 10 × 4 mm precolumn) using an L-2200 autosampler and two L-2130 HTA pumps. A linear gradient from 10% acetonitrile to 70% acetonitrile (containing 0.1% trifluoroacetic acid (TFA), with 0.1% TFA in the aqueous solution as well) was run at a flow rate of 0.7 mL/min within 46 min. The injection volume was 40 μL. Detection of the DHE-derived products was performed with an L-2480 fluorescence detector (DHE: excitation 358 nm, emission 440 nm; 2-OH-E+ and E+: excitation 510 nm, emission 595 nm) and an L-2450 diode array detector (all: VWR Hitachi). Detector signals were recorded and the EZchrom Elite software (VWR International) was used for data acquisition and analysis.

Measurement of the cellular high-energy phosphate content by HPLC

Cells plated in 12- or 24-well plates were grown to confluence and then treated with the respective lipoproteins (in the absence or presence of EP) at concentrations of 0.3 mg/mL for the indicated time periods. After the incubation time, the cells were washed with PBS, trypsinized and transferred into 1.5 mL Eppendorf tubes. After centrifugation, the supernatant was discarded and the cells were deproteinized with 200 μL of 0.4 mol/L perchloric acid. After centrifugation, 150 μL of the acid extract was neutralized with 15-20 μL of 2 mol/L potassium carbonate (4°C). The supernatant obtained after centrifugation was used for HPLC analysis (injection volume: 40 μL). The pellets of the acid extracts were dissolved in 0.5-1.0 mL of 0.1 mol/L sodium hydroxide solution and used for protein determination using the BCA assay. The HPLC analytical method for separation of the high-energy phosphates has been reported previously [22]. Alterations in brief: separation was performed on a Hypersil ODS column (5 μm, 250 mm × 4 mm I.D.) using an L-2200 autosampler, two L-2130 HTA pumps and an L-2450 diode array detector (all: VWR Hitachi). Detector signals (absorbance at 214 nm and 254 nm) were recorded with a personal computer by means of the EZchrom Elite software (Scientific Software Inc., San Ramon, CA, USA). In addition to the obtained nucleotide concentrations, the energy charge (EC) of the cells was calculated employing the following formula: EC = (ATP + 0.5 ADP)/(AMP + ADP + ATP).

Inverted light microscopy

Cells plated in 12-well plates were grown to confluence and then treated with the respective lipoproteins (in the absence or presence of EP) at concentrations of 0.3 mg/mL for the indicated time periods. After washing with serum-free culture medium, cells were examined by inverted light microscopy. Images were acquired on an Olympus IX70 Inverted System Microscope (Olympus Optical Co, Hamburg, Germany) equipped with an Olympus FireWire CVIII camera (Olympus Soft Imaging Solutions GmbH, Muenster, Germany). The Cell B software (Olympus Soft Imaging Solutions GmbH) was used for data acquisition and processing.

Statistics

One-way ANOVA and Bonferroni post tests were used for statistical evaluation of the effects of increasing amounts of EP on indicators of LDL oxidation, cell viability, superoxide anion and ATP levels. Correlations between EP concentrations and REM were calculated by Pearson's coefficient of correlation. Statistical significance was set at p < 0.05. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001.
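As a small illustration, the energy charge formula above translates directly into code; the nucleotide concentrations used here are hypothetical.

# Energy charge as defined in the Methods: EC = (ATP + 0.5*ADP)/(AMP + ADP + ATP).
# The example concentrations (nmol/mg protein) are hypothetical.
def energy_charge(atp, adp, amp):
    return (atp + 0.5 * adp) / (amp + adp + atp)

print(round(energy_charge(atp=25.0, adp=5.0, amp=1.5), 3))  # -> 0.873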
EP inhibits nLDL oxidation induced by Cu2+ ions in vitro concentration-dependently

EP efficiently suppressed the oxidation of the lipid part of the LDL particle. Formation of LPOs as well as of MDA was found to be significantly decreased when nLDL was oxidized in the presence of increasing amounts of EP (Fig 1A and Fig 1B). Correspondingly, EP also suppressed the oxidative modification of apoB, the protein part of the LDL particle: REM values as well as the amount of oxidation-specific epitopes were concentration-dependently decreased when nLDL was oxidized in the presence of increasing amounts of EP (Fig 1C and Fig 1D).

The cytotoxicity of oxLDL in EA.hy926 cells is decreased when formed in the presence of EP

Microscopic imaging showed loss of monolayer integrity and cell detachment when EA.hy926 cells were treated with oxLDL formed in the absence of EP (highly oxidized LDL) compared to cells treated with nLDL (Fig 2A, panel a vs. panel b), which is indicative of cellular dysfunction and suggests cell death. The presence of 500 and 1000 μg/mL EP during LDL oxidation mitigated the cytotoxic effects of oxLDL, which then did not impact the EA.hy926 monolayer (Fig 2A, panels c and d). To investigate monolayer integrity and barrier function in more detail, ECIS measurements were performed. Results revealed that oxLDL treatment reduced both barrier function and monolayer integrity of EA.hy926 cells (Fig 2B). LDL oxidized in the presence of 500 μg/mL EP was almost without effect on barrier function and monolayer integrity, and LDL oxidized in the presence of 1000 μg/mL EP, as well as nLDL, even slightly increased both parameters. To investigate cellular viability in more detail, the MTT assay was used. The results obtained indicated a significantly reduced viability of EA.hy926 cells treated with highly oxidized LDL (53% of nLDL treatment; Fig 3A). Concomitantly, intracellular superoxide levels were elevated more than 2-fold (Fig 3B), indicating massive oxidative stress in oxLDL-treated EA.hy926 cells. When oxidation of nLDL was performed in the presence of EP (500 or 1000 μg/mL, respectively), the cytotoxicity of the thus-formed oxLDLs decreased, reflected in restored cell viability and superoxide levels comparable to cells treated with nLDL (Fig 3A and Fig 3B). In good agreement, measurements of intracellular high-energy phosphates also revealed increased mitochondrial function in cells that were treated with oxLDLs where EP was present during LDL oxidation (Fig 4A and 4B).

Increasing concentrations of EP in the culture medium reduce the cytotoxic effects of highly oxidized LDL in EA.hy926 cells

nLDL was oxidized by means of Cu2+ ions in the absence of EP, resulting in the formation of highly cytotoxic oxLDL as shown above. Subsequently, EA.hy926 cells were incubated with this highly oxidized form of LDL with EP (250, 500 or 1000 μg/mL, respectively) present in the culture medium. In contrast to the intact cell monolayer of nLDL-treated EA.hy926 cells, microscopic imaging again revealed loss of monolayer integrity and cell detachment in cells treated with highly oxidized LDL with EP absent from the culture medium (Fig 5A, panel a vs. panel b). The presence of 500 μg/mL EP in the culture medium almost completely prevented cell detachment, although it did not preserve an intact EA.hy926 monolayer (Fig 5A, panel c). In contrast, the presence of 1000 μg/mL EP completely restored monolayer integrity (Fig 5A, panel d).
Results of ECIS measurements showed that EP at concentrations of 500 and 1000 μg/mL prevented barrier dysfunction and monolayer disintegration of EA.hy926 cells treated with oxLDL when compared to cells that were incubated in the absence of EP (Fig 5B). Correspondingly, the viability of oxLDL-treated cells increased (Fig 6A) and superoxide radical anion formation decreased (Fig 6B) the higher the concentration of EP in the culture medium during oxLDL treatment. In addition, measurements of intracellular high-energy phosphates also revealed that augmented EP concentrations in the culture medium increased ATP levels in EA.hy926 cells. At a concentration of 500 or 1000 μg/mL of EP, this increase in intracellular ATP became highly significant (Fig 7A). The calculated energy charge revealed a significant difference already at the lowest level of supplementation with EP (Fig 7B).

Discussion

In the present study we show that EP is a potential anti-atherosclerotic drug due to two modes of action. Mode 1: EP is capable of attenuating the oxidation of nLDL mediated by Cu2+ ions. Mode 2: EP is capable of directly attenuating the cytotoxic effects of oxLDL in EA.hy926 cells. The EA.hy926 cell line used in this study is a fusion line between human umbilical vein endothelial cells and the A549 lung cancer cell line. Although it might be argued that this cell line is not representative of vascular endothelial cells (immortalization, genetic instability), this fusion hybrid has many conserved functions of endothelial cells such as expression of von Willebrand factor (Factor VIII-related antigen) and synthesis of Weibel-Palade bodies, the exhibition of angiogenesis [18], and involvement in coagulation, fibrinolysis [19] and inflammation [20]. Regarding mode 1: significantly lower levels of LPO, MDA, REM and oxidation-specific epitopes, as well as a decrease of cytotoxicity on EA.hy926 cells, were found when nLDL was oxidized in the presence of increasing amounts of EP. To our knowledge, the underlying mechanisms by which EP attenuates oxidation of nLDL have not been investigated so far. Elucidation of the mechanisms by which EP attenuates Cu2+-mediated LDL oxidation is difficult, since the chemical mechanisms by which copper ions oxidize LDL in vitro are still somewhat elusive. Binding of Cu2+ ions to the LDL particle and subsequent reduction of Cu2+ to Cu+ appears to be a pivotal step [23]. Involvement of preformed lipid hydroperoxides in this reduction has been suggested [24,25] and an accelerating effect of α-tocopherol has been assumed [26]. However, Cu+ ions are probably not in contact with the polyunsaturated fatty acids (PUFAs) inside the LDL particle and, therefore, cannot initiate lipid peroxidation directly. In aqueous solutions, it is known that azobis(2-amidinopropane) dihydrochloride (AAPH) decomposes to amidinopropyl radicals, which, in the presence of oxygen, are rapidly converted into hydroperoxyl radicals [27,28]. Therefore, we oxidized nLDL with AAPH in the presence of increasing amounts of EP in a separate set of experiments. These data are presented as supplemental information (S1 and S2 Figs). We also found an attenuation of the AAPH-induced oxidation of nLDL by EP. Thus, this attenuated AAPH-mediated oxidation of nLDL in the presence of increasing amounts of EP gives rise to the suspicion that EP is capable of scavenging hydroperoxyl radicals, a mechanism which presumably also plays a role in the Cu2+-mediated oxidation of LDL.
An alternative pathway might be that Cu+, bound on the surface of the LDL particle, is capable of generating superoxide, as shown by Burkitt and Duncan [29]. EP has been shown to be capable of scavenging superoxide [30]. This might be, at least in part, another underlying mechanism by which EP is capable of attenuating the Cu2+-mediated oxidation of nLDL. In addition, Burkitt and Duncan have shown that superoxide radical anion formation during Cu2+-mediated oxidation of nLDL subsequently leads to the formation of hydroxyl radicals [29]. The rapid rate of reaction between the hydroxyl radical and the biomolecules on the surface of the LDL particle does not allow the hydroxyl radical to diffuse into the LDL particle more than 1 or 2 molecular diameters. Thus, hydroxyl radicals are very unlikely to be the initiators of the peroxidation of PUFAs in the core of the LDL particle. However, it has been shown that superoxide, under physiological conditions, partially becomes protonated to yield the hydroperoxyl radical [31]. The hydroperoxyl radical is the most stable lipid radical formed in vitro. Due to its uncharged nature, it is very likely capable of diffusing to the lipid-rich region in the core of the LDL particle and initiating peroxidation of PUFAs. Results from our AAPH-mediated LDL oxidation experiments suggest that EP is an efficient scavenger of hydroperoxyl radicals. This may be an additional mechanism by which EP is capable of attenuating the Cu2+-mediated oxidation of nLDL, when hydroperoxyl radicals are formed. It cannot be ruled out that EP, after permeating the lipid-rich region of LDL due to its lipophilic properties [32], might scavenge additional radicals generated during Cu2+-mediated oxidation of nLDL. Regarding mode 2: the protective effect of EP against oxLDL-induced EA.hy926 cell injury is apparently attributable to its superoxide anion scavenging qualities. It is well known that oxLDL induces oxidative stress in vascular endothelial cells via multiple pathways. For example, oxLDL stimulates endothelial NADPH oxidase, leading to enhanced formation of superoxide, a potent initiator of cell proliferation [33,34]. Moreover, oxLDL causes uncoupling of endothelial NOS (eNOS), also leading to formation of superoxide rather than NO [35]. Our data clearly show that EP attenuates oxLDL-induced superoxide formation in EA.hy926 cells in a concentration-dependent manner, thereby enhancing cell viability. Results from numerous recent studies suggest that EP might be a pluripotent pharmacological agent due to its anti-inflammatory, anti-coagulant and anti-oxidative properties. In various animal models of critical illness, treatment with EP has been shown to improve survival and/or ameliorate organ dysfunction, e.g. I/R injury in stroke [16,36], I/R injury in an electrical burn model [37], brain injury [38], myocardial I/R injury [39] and whole-body radiation-induced injury [40]. Anti-inflammatory and anti-oxidative action of EP has also been observed in models using human cells, e.g., in cultured Caco-2 transformed human intestinal epithelial cells [41], in HUVECs [42], in A549 human transformed pulmonary epithelial cells [43] and in A549 human alveolar epithelial cells [44]. Furthermore, EP administration has been shown to inhibit cancer growth in human gastric adenocarcinoma tissues [45]. However, disappointing results have so far been obtained when EP was administered to human patients as a short-term treatment.
In a clinical trial of patients undergoing cardiac surgery, EP failed to improve the outcome, i.e. the inflammatory reactions usually accompanying this surgical treatment [46]. On the other hand, when administered over a prolonged period of time, EP is apparently capable of exerting anti-oxidative action. For example, chronic administration of EP has been shown to be capable of suppressing inflammatory bowel disease or slowing the rate of growth of malignant tumors [47,48]. With respect to the results of the present study, we suggest that chronic administration of EP might be a suitable tool to attenuate the formation of atherosclerotic plaques. Our data clearly indicate that EP is capable of attenuating the oxidation of both the lipid and the protein part of LDL, early and crucial steps in atherogenesis [1]. Our suggestion is supported by the findings of Xiao et al., who have recently reported that EP attenuates high mobility group box-1 (HMGB-1) expression in mouse macrophages. This attenuated expression is associated with a reduced atherosclerotic lesion size in vivo [49]. Moreover, EP has been shown to be capable of reducing vascular endothelial inflammation, an important mechanism involved in atherogenesis, by attenuating endoplasmic reticulum stress [50].

S1 Fig. Effects of increasing amounts of EP on nLDL oxidation induced by AAPH in vitro. nLDL (1.5 mg/mL) was preincubated in the absence or presence of EP (500 and 1000 μg/mL) and then oxidized by addition of 10 mmol/L AAPH until LPO levels exceeded 100 nmol/mg LDL protein. (A) Representative microscopic images of EA.hy926 cells treated with highly or mildly AAPH-oxidized LDL. a) EA.hy926 cells incubated with nLDL; b) EA.hy926 cells incubated with AAPH-oxidized LDL which was formed in the absence of EP (= highly oxidized LDL); c) EA.hy926 cells incubated with AAPH-oxidized LDL which was formed in the presence of 500 μg/mL EP; d) EA.hy926 cells incubated with oxLDL which was formed in the presence of 1000 μg/mL EP (= mildly oxidized LDL). (B) Cell viability was restored to nLDL values when AAPH oxidation was performed in the presence of EP. Data represent mean ± SD (n = 4), *** p < 0.001. (PDF)

S2 Fig. Cytotoxic effects of highly AAPH-oxidized LDL on EA.hy926 cells in the presence of increasing medium concentrations of EP. nLDL (1.5 mg/mL) was oxidized by addition of 10 mmol/L AAPH in the absence of EP in order to obtain a highly cytotoxic form of oxLDL. EP added to the culture media concentration-dependently attenuated the cytotoxic effect of this AAPH-oxidized LDL in EA.hy926 cells. (A) a) EA.hy926 cells incubated with nLDL; b) EA.hy926 cells incubated with highly AAPH-oxidized LDL in the absence of EP in the culture medium; c) EA.hy926 cells incubated with highly AAPH-oxidized LDL in the presence of 500 μg/mL EP in the culture medium; d) EA.hy926 cells incubated with highly AAPH-oxidized LDL in the presence of 1000 μg/mL EP in the culture medium. (B) Cell viability of AAPH-oxidized LDL-treated EA.hy926 cells concentration-dependently increased with EP present in the culture medium. Data represent mean ± SD (n = 4), * p < 0.05, *** p < 0.001. (PDF)

S1 File. Data availability.xlsx.
Observation of the reactions between iron ore and metallurgical fluxes for the alternative ironmaking HIsarna process

This work investigates the melting behaviour of iron ore with calcium-based fluxes, including lime, limestone and basic oxygen furnace (BOF) steelmaking slag, with the aim of exploring the potential kinetic benefits that can be obtained through the formation of a liquid medium on the overall productivity of HIsarna. An HT-CSLM was used to rapidly heat the samples and observe the interface between a given pair of chosen materials in situ. Through this method, the rate at which the reaction progressed and the possible disruptive phenomena that may occur were observed directly. The molten interfaces of the fluxed materials are compared against each other, offering insight into relative dissolution rates. Each flux shows promise of increasing ore fluidity to a varying degree, with limestone generating above a 60% liquid fraction at the 1400°C reaction temperature.

Introduction

Technological advancement and greater understanding of the steel production process have contributed greatly to the improvement in raw material use efficiency in the modern steel industry. However, the integrated steel plant is still heavily reliant on the use of coal, coke and electricity. The steel industry contributes 6.7% of anthropogenic CO2 emissions globally [1], which means that CO2 reduction in the steel industry cannot be ignored as a key step in meeting international climate control agreements. The European Union (EU) has made efforts with the aim of cutting the CO2 emissions of industry by 80-95% by 2050 [2]; however, the energy consumption of the steel industry as a whole is estimated to be only 25-30% higher than the theoretical limit [1]. Hence, to reach the EU CO2 emission reduction target, the European steel industry (and morally the wider community) must develop novel ironmaking processes to transform the industry. HIsarna [3-5], FINEX [6] and COREX [7] are all alternative ironmaking processes being developed to reduce CO2 emissions, replacing the currently main CO2-contributing step in production, the blast furnace (BF) ironmaking process. All three processes rely on the direct reduction or partial reduction of iron ore. In addition, direct reduced iron (DRI) is a key feedstock for electric arc furnace (EAF) practice in countries such as the U.S.A. and Turkey, where there is an abundance of natural gas. Direct reduction requires a reductive gas atmosphere, usually CO from coal or natural gas, to reduce the ore to iron. Pre-reduction partially reduces the ore before complete reduction of the ore in a hearth or smelting vessel. Reduction of the ore consists of multiple steps, shown in Equations (1)-(3):

3Fe2O3 + CO → 2Fe3O4 + CO2 (1)
Fe3O4 + CO → 3FeO + CO2 (2)
FeO + CO → Fe + CO2 (3)

The reduction reactions in these novel ironmaking processes are conducted in the solid state. Solid-state reactions are in most cases kinetically slower than liquid-state reactions and are therefore potentially limiting to the iron production rate if these technologies are to be widely adopted. The rate of this reaction can be increased either through optimising conditions such as temperature, pressure and variation in iron source, or through creating intermediate active species to encourage a fundamental step change in the rate-limiting step of the reaction. From a metallurgical standpoint, a flux is a chemical cleaning or purifying agent, used in both the extraction and joining of metals.
A flux material added during smelting binds to unwanted minerals and helps remove them, forming slags [8]. As this is an important part of the ironmaking and steelmaking processes, it has seen a large level of interest from the research community, specifically regarding the capability of CaO fluxes, as CaO is an effective flux already implemented in steelmaking processes due to its price, availability and effectiveness at removing key impurities such as silica and phosphorus [9]. Adding flux in HIsarna in the cyclone converter furnace (CCF, the location of solid-state pre-reduction in the process) together with the iron ore offers the normal production advantages, such as removal of SiO2 from the ore, but also offers an early fluxing opportunity. Due to the mixing mechanics, the iron may form an early liquid slag inside the CCF, which should aid the reduction of potential accretions in the furnace, as the lower melting point will prevent the solidification of the material at cool spots of the furnace. These can cause significant processing problems, as pathways can be blocked, the most vital being that between the CCF and the smelting reduction vessel (SRV, the location of carbon fuel injection and full conversion to liquid hot metal). This paper investigates interactions between CaO-bearing fluxes and iron oxide in iron ore. Lime, limestone and readily available BOF slag present a well-established set of CaO-containing derivatives, owing to their current use and availability on a steel plant, and allow cross-comparison of cleaner and heavily mixed substances. The focus of this study is on each material's ability to encourage the presence of a liquid state within the ore at short interaction times. The phase diagram for CaO and FeOx can be found in the slag atlas, based on the work of Phillips and Muan [10,11]. It shows that CaO contents between 0 and 40% have a strong influence on reducing the melting point of the iron species, from around 1600°C to 1205°C at the eutectic [10]. The presence of a liquid species, as discussed above, has the potential to unlock the rate of reduction, with the mass transfer of species such as oxygen, phosphorus and sulphur being orders of magnitude higher in the liquid phase than by solid-state diffusion.

Experimental method

Samples of iron ore were placed in contact with the chosen fluxing agent and heated to the target temperature (1350-1450°C) to begin the reaction between the materials. During heating, the reaction between iron ore and fluxing agent is viewed in situ using a High-Temperature Confocal Scanning Laser Microscope (HT-CSLM), and subsequently interrogated using post-analysis techniques such as Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS). The performance of each flux is compared by looking at the liquid phase and composition of the melt, and the effects of liquid fraction and reaction mechanism are discussed with regard to their potential effect on reduction rates.

Materials

The iron ore, limestone and BOF slag were all sourced from the Tata Steel IJmuiden steel plant and compositions were provided. The iron ore was used unsintered, unpelletised and dried, with 66% iron content, classifying it as a high-grade/high-iron-content ore. The limestone contained 55% CaO and 37% CO2, which are assumed to be combined in the form CaCO3. The lime used was purchased from Sigma-Aldrich, reagent grade with a purity of 99%.
The composition of all the materials used is given in Table 1.

HT-CSLM

The HT-CSLM is an in-situ observation tool consisting of a gold-coated elliptical chamber positioned below a UV imaging laser. The chamber has the ability to reach 1700°C at heating rates of up to 700°C min−1 and cooling rates of up to 3000°C min−1 (in the higher temperature range). Within the chamber, a halogen bulb, which emits IR radiation, is located at one focal point. The IR radiation is subsequently focused onto the sample, which is positioned at the second focal point of the ellipse. The sample sits on an instrumented alumina stage, where an R-type thermocouple is threaded through and attached to the bottom of a platinum ring, upon which the sample sits. The atmosphere in the chamber can be controlled through rotary vacuum pump extraction and a high-purity gas feed. Images of the confocal are presented in Figure 1 [12]. The furnace works in situ with a UV laser microscope, which avoids image interference from the IR radiation of the bulb and hot sample. The microscope sits above the chamber and views the sample through a quartz window at the top of the chamber. The optics within a confocal microscope are designed to give a very narrow depth of field, allowing the equipment to image surface roughness and texture in high detail.

Experimental procedure

An alumina crucible was half filled with iron ore and half filled with fluxing agent side by side (Figure 2). The material was compressed in the crucible so that there was a defined interface between the two materials. The sample was then heated to 1350°C, 1400°C or 1450°C at 500°C min−1 and held for 60 s before being rapidly quenched. The experiments were recorded at 15 fps. The experiments were conducted in an air atmosphere at the appropriate partial pressure of oxygen, ensuring no occurrence of passive diffusion-led reduction in the iron ore during the experiment. The quenched sample was mounted in epoxy resin and polished to a flat surface using SiC grinding pads, incrementally increasing the grit from 800 through to 1500. The sample was polished with oil-based diamond suspensions with particle sizes of 9 μm, 3 μm and 1 μm in turn. This routine was done without the use of water so as not to affect the anhydrous CaO in the samples. This polishing routine gave a clean and flat surface ready for SEM imaging and EDS elemental analysis.

FEG-SEM and EDS

The scanning electron microscope (SEM) used was the JEOL 7800F, a FEG-SEM equipped with an Oxford Instruments EDS detector. This instrument allowed for rapid pump-down of samples after removal from a desiccator, limiting interaction with ambient moisture.

Results

A high-grade iron ore (62% Fe) was placed into an alumina crucible along with each of the three chosen fluxing agents. The two materials were observed in situ while heated to 1400°C and held for 60 s. The quenched samples were then cut, polished and investigated via SEM and EDS. Each material pair is presented below with brief comments.

Lime-iron ore

Figure 3 shows the images from the HT-CSLM of the lime (CaO)-iron ore samples. Image A shows the sample before heating and image B shows the sample after holding at 1400°C for 60 s. At 200°C the two materials have a distinct interface between the lime on the right and the ore on the left.
At 1400°C the two materials appear significantly changed, with the ore on the left showing evidence of melting, depicted by the bright section near the centre of the image (at the original sample interface). This section was seen to be fluid and appears brighter due to the flat nature of the liquid surface, which allows a larger section of the field of view to be in focus at a single time point. Figure 4(A) displays SEM images of the sample, with the lime on the left and ore on the right. The lime is seen to be unaffected by the heat and appears almost homogeneous. The bulk of the ore is also similar to that before the heat treatment, with the particles appearing angular and discrete, showing no signs of fully melting. The images also show that there is an interface region between the two materials, which has undergone significant mixing. The interface is clearer in the EDS mapping (Figure 4(B)), where the purple region shows the elemental mixing of the two main species (Ca and Fe). The liquid interface which formed during the experiment appears to strongly adhere to/wet the bulk lime phase preferentially. The reaction of lime and iron ore was studied further by holding the reaction at temperatures of 1350°C (Figure 4), 1400°C (Figure 5) and 1450°C (Figure 6). The SEM images allowed the liquid fraction to be measured over the three temperatures, at 16%, 32% and 41%, respectively. The increased temperature sped up the reaction, and the progression of the reaction is seen through the holding times. The molten area interface between the two materials is more defined and consistent at the higher temperatures. At 1350°C, the interface is thin and the blue calcium is dispersed throughout the molten phase, while the sample held at 1400°C for 60 s has a thicker interface and shows less dispersion of the pure CaO flux into the molten solution. The progression of the reaction pathway can be viewed across these three samples, with the higher temperatures showing further progression of mixing.

Limestone-iron ore

The HT-CSLM images in Figure 7 show the limestone (right)-iron ore (left) sample at 200°C. Owing to the larger particle size of the limestone, the surface of the image is rougher and is not as clear as the finer-particle-size lime sample (Figure 3). After heating to 1400°C, the image shows evidence of liquid in the sample, seen in the white areas on the iron ore side of the interface. The SEM images of this sample (Figure 8(A)) show that the limestone had a substantial effect on the appearance of the ore after heating. Unlike the ore in the lime samples, the ore is more homogeneous, with less distinct particles and a more connected mass. The bulk limestone on the left of the image looks like the material before the experiment. The limestone particles near the ore interface, shown clearly in Figure 8(B), have an external reaction layer. These particles surrounded by reacting material are the closest this sample came to forming a continuous/defined reaction interface between the two materials. The test was carried out at three temperatures of 1350°C (Figure 8), 1400°C (Figure 9) and 1450°C (Figure 10), and the liquid fraction was measured to be 26%, 64% and 99%, respectively. In the 1350°C image, the ore is one homogeneous phase; however, there is a dispersion of iron.

BOF slag-iron ore

Figure 11 shows the samples of iron ore and BOF slag as observed via HT-CSLM.
The slag (right) is less bright than the fluxing phase in the previous samples, but the interface is still clearly visible before the experiment. The images taken after heating show that, unlike the previous samples, molten material is present throughout the crucible, not only on the ore side or at the interface. The SEM results from the slag sample are shown in Figure 12. The electron image shows that the iron ore (left) still displays properties similar to an un-molten sample, as there are small separated particles like those found in the lime sample. However, there is no linear reaction interface between the two phases as seen in the lime sample. At the interface, the sample appears as an exaggerated form of the limestone sample, with particles surrounded by reacting material (Figure 12(B)). In addition, the slag appears fully molten, which is to be expected from its theoretical melting point, and displays a greater increase of iron content throughout the bulk phase compared to the other two fluxing materials. The test was done at three different temperatures of 1350°C (Figure 12), 1400°C (Figure 13) and 1450°C (Figure 14), with liquid fractions of 23%, 34% and 100%, respectively. In the 1350°C slag sample, the iron ore can still be seen as individual particles; however, they are suspended in a matrix of the slag. The slag itself is molten, yet it seems to have had no reaction with the iron ore despite flowing through the material. The 1450°C sample shows one homogeneous structure; the sample has been fully molten, as evidenced by the meniscus shape of the sample.

Fluxing material difference

By segmenting molten material from the SEM images and measuring the surface area fraction with image analysis software, the percentage of iron ore in the molten state has been calculated for each material pairing (a sketch of such an area-fraction measurement is given below). Lime and BOF slag have similar levels of molten iron ore at 1400°C, with results of 32% and 34%, respectively, while limestone had a significantly higher result of 64% liquid material. Although lime and BOF slag gained very similar liquid percentages, the nature of the liquid fraction and its dispersion throughout the samples differ greatly. The lime-ore sample has a thick molten area at the interface, connected to the lime. The BOF slag-ore sample shows no defined molten interface, in contrast to that observed in the lime-ore sample, and the molten section is spread throughout the iron ore in the sample. Furthermore, the lime in the sample is still solid and tightly packed, whereas the BOF slag has fully melted into a continuous medium. Owing to the solid-phase reaction between the lime and iron ore, the reaction has a defined reaction zone, which is limited by the size of the contact interface. The reaction can only progress through this linear interface, and perpetuates through the dissolution of the materials into either side of this experimentally size-determined reaction interface. This reaction interface would continue to grow in depth if the experiment were conducted for a longer period, but the reaction area would remain approximately the same. The BOF slag-iron ore sample has a different reaction path due to the lower melting point of the fluxing agent. Being molten at experimental temperatures, the material appears to have permeated through the iron ore matrix, resulting in an exponential growth of the liquid reaction interfacial area.
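The image-analysis software used for the segmentation is not named in the paper, so the following is only a minimal sketch of such an area-fraction measurement, assuming a greyscale SEM micrograph in which the quenched molten phase images brighter than the unreacted particles; the file name, the Otsu threshold and the speck-removal size are illustrative choices, not the study's actual pipeline.

```python
from skimage import io, filters, morphology

def liquid_fraction(image_path: str) -> float:
    """Return the area fraction of pixels segmented as molten phase."""
    img = io.imread(image_path, as_gray=True)      # grey levels scaled to [0, 1]
    thresh = filters.threshold_otsu(img)           # automatic grey-level split
    molten = img > thresh                          # assumes the molten phase is the brighter one
    molten = morphology.remove_small_objects(molten, min_size=64)  # drop noise specks
    return float(molten.mean())                    # fraction of the imaged area

# e.g. liquid_fraction("lime_ore_1400C.tif") would be compared against the
# reported 32% molten fraction for the lime-ore pair held at 1400 degC.
```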
Although, at the predetermined time point at which these experiments were stopped, the liquid fraction is similar between the lime-ore and BOF slag-ore samples, the extensive mixing and more complex interface topology developed by the BOF slag-iron ore reaction point to a higher interfacial area and faster interface development through the bulk phase. The limestone sample outperformed both lime and BOF slag for liquid fraction development. The limestone follows similar conditions to the lime; however, it benefits from two key differences. The first is that limestone undergoes calcination at 900°C to form the CaO actually utilised in the reaction; the calcination reaction is shown in Equation (4) [13]:

CaCO3 (s) → CaO (s) + CO2 (g) (4)

The literature has previously stated that limestone calcination results in newly formed CaO, which has a higher reaction activity than aged lime in processes such as lime dissolution in slag [14]. Second, the limestone used is of larger particle size than the lime; as such, when a particle begins to flux, the connectivity of the fluxing agent is effectively larger, allowing wetting phenomena to surround or 'drag' the material into the molten pool. Because of the sample size usable in the HT-CSLM, the actual effect of these materials on reduction kinetics is beyond the scope of this paper, as there is not enough material for any quantitative analyses to be conducted. However, the mechanisms and influencing physical factors of three key metallurgical fluxing agents have been observed and discussed with regard to their expected effect on reduction. This work will be combined with larger bulk furnace trials and reductive environments in the near future, enabling a direct assessment of reduction performance under fluxing effects. From the above, limestone and BOF slag could both be considered potentially useful materials to add into gas reduction processes of iron ore. The higher fluxing rate of limestone presents a key case for increasing liquid fraction. However, the permeation of liquid BOF slag through the bulk ore at the reaction temperatures presents a potentially exploitable fluxing pathway, which will only be amplified by the large-volume/bulk production facilities used in ironmaking (as opposed to the lime and limestone, which develop a liquid reaction interface purely on a predetermined, engineered interfacial area). Based on the shrinking core model, the reduction mechanism of iron ore [15,16] can be broken down into several stages, including:

(1) Mass transfer of reductive species in the gas phase
(2) Diffusion of reaction species and products across the gas boundary layer
(3) Diffusion of CO through the partially reduced ore particle
(4) Reaction at the reduction frontier
(5) Diffusion of CO2 away from the reaction interface

To begin with, due to the ultra-high temperature of the system and the large Gibbs free energy driving force, it is fair to assume that the reaction at the reduction frontier is unlikely to be rate limiting. In addition, the high gas turbulence and abundant supply of reductive gas species lead to the assumption that mass transfer in the gas phase is also non-rate-limiting. This leaves diffusion through the gas boundary layer and transport of reduction reactants/products through the reduced layer of the ore particle as possible rate-limiting steps (a sketch of the corresponding shrinking core expressions follows this list).
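To make the candidate rate-limiting regimes concrete, here is a hedged sketch of the textbook shrinking core expressions for a spherical particle, in the form usually associated with the models cited above [15,16]; tau, the time for complete conversion under each regime, is an assumed input rather than a value measured in this study.

```python
import numpy as np

def shrinking_core_time(X: np.ndarray, tau: float, regime: str) -> np.ndarray:
    """Time to reach fractional conversion X under one rate-limiting step."""
    if regime == "gas_film":          # boundary-layer diffusion control
        return tau * X
    if regime == "product_layer":     # diffusion through the reduced shell
        return tau * (1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X))
    if regime == "reaction":          # chemical reaction at the frontier
        return tau * (1 - (1 - X) ** (1 / 3))
    raise ValueError(f"unknown regime: {regime}")

# Illustrative use: time profile for product-layer control with tau = 1 h.
X = np.linspace(0.0, 0.99, 5)
print(shrinking_core_time(X, tau=3600.0, regime="product_layer"))
```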
Boundary layer diffusion can be qualitatively interrogated through calculation of the dimensionless Sherwood number expressed in Equation (5). Using the same form as the Ranz-Marshall correlation for heat transfer to a sphere, substitution of the Schmidt number (defined as the ratio of momentum diffusivity to mass diffusivity) for the Prandtl number gives an analogous form (Equation (6)), from which the controlling factors on this variable can be appreciated in relation to other transport phenomena (a numerical sketch of Equations (5)-(6) is given at the end of this section):

Sh = Convective mass transfer rate / Diffusion rate (5)

Sh = 2 + 0.6 Re^(1/2) Sc^(1/3) (6)

Both the Reynolds and Schmidt numbers are calculated from variables including viscosity, density, diffusivity, distance and velocity. The controlling factor for most of these variables is temperature, a process parameter that is technologically defined/restricted in industrial reduction processes rather than scientifically targetable. As a result, this leaves diffusion of reactants and products through the iron ore particles as the main way of influencing reduction kinetics. Fluxing/formation of a liquid phase within the process is likely to have two opposing contributions to this:

(1) Formation of liquid/a continuous medium reduces the surface area of the iron ore and thus increases the physical reaction pathway from the bulk gas to the unreacted material inside ore particles
(2) Diffusion/transport kinetics are much faster in the liquid phase than in the solid, not only due to higher diffusion coefficients but also due to convective stirring within the liquid medium

Temperature

Increasing the test temperature increased the liquid fraction with all three fluxing agents, but to differing degrees. Temperature has the effect of speeding up reactions and giving more energy to the reactants. This indicates that the interaction is governed by the diffusion rates of the ore into the flux: if the reaction rate were controlled solely by the size of the interface, then there would be no difference as the temperature changed. The lime samples show this through the sharp increase of liquid fraction between 1350°C and 1400°C, from 16% to 32%, and a further increase at 1450°C to 41%. This is due to the diffusion rate increasing in line with the greater thermal energy within the system. The limestone samples reacted more at the higher temperature, with the highest-temperature samples becoming fully molten. At 1350°C and 1400°C, when compared to the CaO samples, this can be explained by the higher liquid fraction created around the core of the shrinking core. This liquid-fraction material increases and in turn speeds up the rate of interaction between the materials, as diffusion transforms from solid-solid to liquid-solid, generating a runaway effect on the rate of interaction. The BOF slag sample suffers in terms of reaction rate because the melting point of the slag is lower than that of the product of the iron ore and flux; this means that the liquid-solid interaction is reached at lower temperatures. In the highest-temperature sample, the crucible shows a full meniscus, indicating a fully homogeneous matrix of the ore and the slag combined. The lower temperatures have a molten slag and a solid matrix of iron ore. Temperature is important in this reaction in order to reach a critical phase reaction barrier that enables the process to take place. Once these boundaries are met, the slag can be seen as a valid option as a fluxing reagent.
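Returning to Equations (5) and (6), a minimal numerical sketch of the Ranz-Marshall estimate of the convective mass-transfer coefficient follows; the Reynolds number, Schmidt number, diffusivity and particle diameter are illustrative placeholders, not HIsarna process data.

```python
def sherwood_ranz_marshall(Re: float, Sc: float) -> float:
    """Sh = 2 + 0.6 * Re^(1/2) * Sc^(1/3), Schmidt number replacing Prandtl."""
    return 2.0 + 0.6 * Re ** 0.5 * Sc ** (1.0 / 3.0)

def mass_transfer_coefficient(Re: float, Sc: float, D: float, d: float) -> float:
    """k = Sh * D / d, with gas diffusivity D (m^2/s) and particle diameter d (m)."""
    return sherwood_ranz_marshall(Re, Sc) * D / d

# Illustrative numbers only: Re ~ 50 for a fine ore particle in the gas stream,
# Sc ~ 0.7 for CO in hot gas, D ~ 1e-4 m^2/s at high temperature, d = 100 um.
print(mass_transfer_coefficient(Re=50.0, Sc=0.7, D=1e-4, d=100e-6))  # k in m/s
```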
Conclusion

The mechanistic interactions between iron ore and three common metallurgical fluxing agents (CaO, CaCO3 and BOF slag) have been observed in-situ at temperatures of 1350, 1400 and 1450°C and through ex-situ electron microscopy. The aim of the experiments was to begin to uncover the potential benefit that co-injection of the reactants could provide to the reduction or pre-reduction of iron ore within the low-carbon ironmaking technology HIsarna. In addition, the specific use of BOF slag offers recycling and metallic-content reclamation potential within integrated steelworks, improving overall yield. Experimentally, the fluxes can be ranked on liquid fraction formed over the same defined reaction period as limestone >> slag ≥ lime. The limestone results show that newly formed CaO reactant species offer increased fluxing reaction kinetics. Despite this, the use of a fluxing agent operating above its melting point (BOF slag) presents a technological advantage when scale-up is considered for industrial application. Liquid fluxes will mix faster and more consistently with the ore, allowing the formation of an overall larger reaction interface with greater growth potential throughout the bulk process.
Natural variation in cross-talk between glucosinolates and onset of flowering in Arabidopsis

Naturally variable regulatory networks control different biological processes including reproduction and defense. This variation within regulatory networks enables plants to optimize defense and reproduction in different environments. In this study we investigate the ability of two enzyme-encoding genes in the glucosinolate pathway, AOP2 and AOP3, to affect glucosinolate accumulation and flowering time. We have introduced the two highly similar enzymes into two different AOP null accessions, Col-0 and Cph-0, and found that the genes differ in their ability to affect glucosinolate levels and flowering time across the accessions. This indicates that the different glucosinolates produced by AOP2 and AOP3 serve specific regulatory roles in controlling these phenotypes. While the changes in glucosinolate levels were similar in both accessions, the effect on flowering time was dependent on the genetic background, pointing to natural variation in cross-talk between defense chemistry and onset of flowering. This variation likely reflects an adaptation to survival in different environments.

Introduction

To maximize plant fitness in challenging, ever-changing and unpredictable environments, organisms must coordinate growth and defense to respond to diverse combinations of biotic and abiotic factors. Interactions between biotic and abiotic factors force growth and defense to co-evolve for local adaptation. Optimizing this necessary co-evolution requires the combination of various mechanistic solutions, including the plasticity provided by regulatory networks to respond to environmental changes. Additional solutions are provided by the diversity inherent in genetic variation within a plant species, consequently providing different solutions depending on the environment (Pigliucci, 2003). Together, regulatory networks and genetic variation establish the potential for diverse solutions across a species to optimize growth and defense across highly varied environments (Burow et al., 2010; Paul-Victor et al., 2010; Woods et al., 2012). To identify genes involved in cross-talk between growth and defense, we focused on Arabidopsis natural variation studies. Numerous studies have identified key genes controlling natural variation in the plastic timing of flowering and thereby reproduction (Koornneef et al., 1991; Michaels and Amasino, 1999; Johanson et al., 2000; El-Din El-Assal et al., 2001; Salomé et al., 2011; Ward et al., 2012; Grillo et al., 2013; Méndez-Vigo et al., 2013). Similarly, natural variation studies of the primary Arabidopsis defense compounds, the glucosinolates produced from tryptophan (indole glucosinolates) and methionine (aliphatic glucosinolates), have aimed at understanding the diversity in glucosinolate profiles (Kliebenstein et al., 2001a,b; Hirai et al., 2005; Keurentjes et al., 2006; Wentzell et al., 2007; Rowe et al., 2008; Jensen et al., 2014). Cross-talk between the networks controlling flowering and glucosinolate profiles seems to occur, as glucosinolate biosynthetic genes within the GS-AOP locus have been associated not only with glucosinolate biosynthesis, but also with the control of onset of flowering (Figure 1) (Atwell et al., 2010; Kerwin et al., 2011). Hence, the genes in this locus may help illuminate the cross-talk between defense and flowering time in Arabidopsis.
Based on the catalytic properties of AOP2 and AOP3, we hypothesized that the previously observed difference in these two genes' potential regulatory effects may depend on their enzymatic function and substrate availability, i.e., the levels of 3msp and 4msb.

FIGURE 1 | AOP2 and AOP3 have been associated with natural variation in different phenotypes. The GS-AOP locus encoding the glucosinolate biosynthetic enzymes AOP2 and AOP3 has been associated with variation in glucosinolate profiles due to their enzymatic activities. The same genes have been linked to changes in glucosinolate levels and onset of flowering in different natural variation studies.

The production of these substrates is largely controlled by the allelic status of GS-ELONG, encoding different methylthioalkylmalate synthases. In the absence of a functional AOP2 or AOP3, the presence of MAM1 (methylthioalkylmalate synthase 1) leads to accumulation of 4msb as the major SC aliphatic glucosinolate, while accumulation of 3msp is attributed to the presence of MAM2 (Figure 2) (de Quiros et al., 2000; Kroymann et al., 2001, 2003). GS-ELONG and GS-AOP show an epistatic interaction for glucosinolate accumulation (Kliebenstein et al., 2001b). This epistasis might be linked to the differences in AOP2/3 substrate availability depending on the allelic state at GS-ELONG, which would hint at specific feedback effects of the different products formed by AOP2 and AOP3. In addition to their enzymatic and regulatory role in glucosinolate biosynthesis, both AOP2 and AOP3 have also been linked to flowering time control. Introduction of a functional AOP2 into the AOP null accession Col-0 shortened the circadian clock period and altered flowering time (Kerwin et al., 2011). In contrast, the AOP3 gene had a hit in a genome-wide association study for the control of the transcript level of a key flowering regulator, the MADS-box transcription factor FLC (Flowering Locus C) (Atwell et al., 2010). Yet the associations between the AOPs and flowering time remain to be tested in different accessions. AOP2 and AOP3 may thus mediate cross-talk between chemical defense, i.e., glucosinolates, and onset of flowering, which is critical for adaptation to environmental settings. Hence, we tested whether these two enzyme-encoding genes, AOP2 and AOP3, have different effects on glucosinolate levels and flowering time in two different accessions, Col-0 and Cph-0, which express neither AOP2 nor AOP3. We introduced AOP2 and AOP3 into the two AOP null accessions, which differ at the GS-ELONG locus and thus accumulate different AOP substrates. The use of multiple transgenic lines enables direct comparison of the effects of the three different AOP alleles (AOP2, AOP3, AOP null) on glucosinolate accumulation and flowering time. Thus, this approach allowed us to systematically investigate the differences in the regulatory roles of all three AOP alleles, which is not possible in natural variation studies such as GWAS or in QTL mapping, e.g., using RILs or F2 populations. The introduction of AOP2 and AOP3 led to different changes in the glucosinolate profiles of the two accessions. While the regulatory effects on glucosinolate levels were similar in both accessions, our study shows that AOP2 and AOP3 possess different abilities to change flowering time depending on the genetic background.

Generation of Expression Constructs

For generation of expression constructs, we extracted genomic DNA from leaf tissue using the CTAB method (Clarke, 2009).
The genomic sequence of AOP2 was amplified from Col-0 gDNA expressing the AOP2 allele of B. oleracea, BoGSL-ALK, under the control of the CaMV 35S promoter (Li and Quiros, 2003) with the primers 5′-ggcttaauATGGGTGCAGACACTCCTCAAC-3′ and 5′-ggtttaauTTATGCTCCAGAG

FIGURE 2 | Enzymatic functions of MAMs and AOPs in the aliphatic glucosinolate pathway. The chain length of aliphatic glucosinolates is controlled by GS-ELONG: expression of MAM2 in the absence of MAM1 leads to C3 glucosinolates, MAM1 is required for the production of C4 glucosinolates, and MAM3 is responsible for the production of LC glucosinolates with C8 as the predominant chain length. The C3 glucosinolate, 3-methylsulfinylpropyl glucosinolate (3msp), can be converted to 3-hydroxypropyl glucosinolate (3ohp) by AOP3 or to 2-propenyl glucosinolate (2-prop) by AOP2. The C4 glucosinolate, 4-methylsulfinylbutyl glucosinolate (4msb), is converted by AOP2 to 3-butenyl glucosinolate (3-but), which is further converted by GS-OH to 2(R/S)-hydroxy-3-butenyl glucosinolate. Here, we report that 4msb can also be converted by AOP3 to give 4-hydroxybutyl glucosinolate (4ohb) (see Figure 3).

Generation of Transgenic Plants

Agrobacterium tumefaciens (strain PGV38 c58) was transformed with either the 35S:AOP2 or the 35S:AOP3 expression construct for transformation of Col-0 and Cph-0 by floral dip (Clough and Bent, 1998). Positive transformants carrying the 35S:AOP2 construct were selected on MS plates with 50 µM kanamycin. T1 seeds carrying the 35S:AOP3 construct were selected by repeated spraying with 300 µM Basta at the 4-leaf stage. In the T1 and subsequent generations, the presence of the transgenes was confirmed by PCR on genomic DNA. For AOP2, we obtained multiple insertion lines in both Col-0 and Cph-0, whereas for AOP3 multiple lines were obtained in Cph-0 but only one line in Col-0.

Plant Growth

For all experiments, seeds were sown in a randomized design and cold stratified at 4°C for at least 2 days. The plants were grown in climate chambers at 80-120 µE/(m2 s) light intensity, 16 h light, 20°C, and 70% relative humidity. Flowering time was measured as the period between cold stratification and the day when the inflorescence reached 1 cm in height, and normalized to the corresponding WT.

Statistics

We used R version 3.0.1 (2013-05-16) for statistical analysis (R Core Team, 2013). For the WT and insertion lines, significance was tested using the lm and Anova functions with the following linear model: GLS = Experiment + Genotype + insertion line nested within Genotype + Experiment:Genotype, with specific differences tested post hoc using the pairwise t-test function with a Holm adjustment for multiple testing (a Python re-expression of this model is sketched below). Summary statistics were calculated using summaryBy from the doBy package (Højsgaard and Halekoh, 2014).

Generation of the Col-0 × Cph-0 F2 Population

To create the Col-0 × Cph-0 F2 mapping population, Col-0 and Cph-0 WT plants were grown until the flowering stage and Cph-0 was used to pollinate Col-0. The F1 plants were genotyped for MAM1 and MAM2 to ensure the presence of a copy of each allele, i.e., heterozygosity, before the F1 plants were selfed. The F2 population was investigated for flowering time by measuring the number of days from stratification until the primary inflorescence reached 1 cm (File S1 in Supplementary Table 1). For 171 plants, we obtained both genotype and phenotype results, which could be used in QTL mapping.
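The model above was fitted in R with lm/Anova; as an illustration only, the same nested ANOVA and Holm-adjusted pairwise tests can be re-expressed in Python with statsmodels. The column names ("GLS", "Experiment", "Genotype", "Line") are assumptions about the data layout, and insertion-line labels are assumed unique within each genotype.

```python
import itertools
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

def nested_anova(df: pd.DataFrame) -> pd.DataFrame:
    # Insertion line nested within genotype, plus the experiment x genotype interaction.
    model = ols("GLS ~ C(Experiment) + C(Genotype) + C(Genotype):C(Line)"
                " + C(Experiment):C(Genotype)", data=df).fit()
    return anova_lm(model, typ=2)   # type-II ANOVA table, as in R's Anova()

def holm_pairwise(df: pd.DataFrame) -> dict:
    # Pairwise t-tests between genotypes, Holm-adjusted for multiple testing.
    pairs = list(itertools.combinations(df["Genotype"].unique(), 2))
    pvals = [stats.ttest_ind(df.loc[df.Genotype == a, "GLS"],
                             df.loc[df.Genotype == b, "GLS"]).pvalue
             for a, b in pairs]
    adjusted = multipletests(pvals, method="holm")[1]
    return dict(zip(pairs, adjusted))
```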
Genotyping by MassARRAY

For genotyping, genomic DNA was extracted using the Qiagen DNeasy 96 Plant Kit (Qiagen) according to the manufacturer's protocol. 100 sites were chosen for Sequenom MassARRAY in an attempt to get full coverage of the genome. However, we did not have any previous knowledge of the Cph-0 accession, and only 53 of the SNPs turned out to be polymorphic between the parents. These SNPs were used to generate genetic maps for each mapping population using the Haldane function (File S2 in Supplementary Table 1; a short numerical sketch of this function is given below).

QTL Mapping

Windows QTL Cartographer Version 2.5 was used for composite interval mapping, determining significance thresholds for flowering time by running 1000 permutations to estimate the 0.05 significance levels (Wang et al., 2012). The main-effect markers were validated and tested for two-way epistatic interactions using lm (type II) and ANOVA in R version 3.0.1 (2013-05-16), including the most significant marker for each QTL.

Results

AOP2 and AOP3 Alter Glucosinolate Profiles in Col-0 and Cph-0

Biosynthesis of the glucosinolate core structure gives rise to methylthioalkyl glucosinolates, which are further converted to methylsulfinylalkyl glucosinolates by GS-OX FMO1-5 (Sønderby et al., 2010). Methylsulfinylalkyl glucosinolates accumulate in AOP null accessions, i.e., accessions that express neither a functional AOP2 nor AOP3 in leaves (Figure 2). To investigate the relative effects of AOP2 and AOP3, we introduced them into two AOP null accessions accumulating different SC methylsulfinylalkyl glucosinolates. We used the Col-0 accession, accumulating mainly 4msb, and a so far undescribed accession accumulating 3msp as its major SC glucosinolate, Copenhagen-0 (Cph-0). In agreement with previous work using the Brassica oleracea or Arabidopsis accession Pi AOP2 (Li and Quiros, 2003; Wentzell et al., 2007; Neal et al., 2010), constitutive expression of AOP2 in the Col-0 background led to the formation of 2-propenyl and 3-butenyl glucosinolate from 3msp and 4msb, respectively (Figures 2, 3B,C; for all individual glucosinolates see Figure S1). The presence of a functional GS-OH within Col-0 further modified the 3-butenyl side chain to 2R- and 2S-2-hydroxy-3-butenyl (Figure 3C). AOP3 expression in Col-0 led to the conversion of 3msp to 3ohp and, interestingly, we also detected small amounts of 4-hydroxybutyl glucosinolate (4ohb) (Figures 3B,C). Thus, AOP3 appears to be able to convert 4msb to 4ohb in planta, an activity that could not be detected in vitro (Kliebenstein et al., 2001c). However, based on the ratios of substrates and products in Col-0, AOP3 seems to have a preference for 3msp. In the Cph-0 background, 4msb is absent and instead 3msp is the major SC glucosinolate. Consequently, introduction of the enzymes into Cph-0 led to fewer different glucosinolate structures than in Col-0. As expected, AOP2 expression resulted in the formation of 2-propenyl glucosinolate from 3msp, whereas AOP3 formed 3ohp from the same substrate (Figure 4B; for all individual glucosinolates see Figure S1).

AOP2 and AOP3 have Differential Effects on Glucosinolate Levels in Col-0 and Cph-0

Several studies have suggested that both AOP2 and AOP3 influence the total level of glucosinolate accumulation as well as the specific structures being produced (Kliebenstein et al., 2001b; Wentzell et al., 2007; Rohr et al., 2009, 2012; Brachi et al., 2015). To directly compare the regulatory capacity of both genes, we quantified the glucosinolates in our different AOP2 and AOP3 lines.
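As a brief methodological aside before the quantification results: the Haldane mapping function referred to in the genotyping section converts a pairwise recombination fraction r into an additive map distance, d = -1/2 ln(1 - 2r) Morgans. A minimal sketch with a hypothetical recombination fraction as input:

```python
import math

def haldane_cM(r: float) -> float:
    """Map distance in centimorgans from recombination fraction r (0 <= r < 0.5)."""
    if not 0.0 <= r < 0.5:
        raise ValueError("recombination fraction must be in [0, 0.5)")
    return -50.0 * math.log(1.0 - 2.0 * r)   # -1/2 ln(1-2r) Morgans, x100 for cM

print(haldane_cM(0.10))   # ~11.16 cM between two markers with r = 0.10
```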
The introduction of AOP2 to Col-0 led to a severalfold increase in SC glucosinolates (Figure 3A), which is in agreement with previously published studies (Wentzell et al., 2007; Burow et al., 2015). The high levels of SC glucosinolates correlate with high accumulation of glucosinolates downstream of AOP2, i.e., 2-propenyl, 3-butenyl, and 2R/2S-2-hydroxy-3-butenyl (Figures 3B,C). Introduction of AOP2 to Col-0 changes, however, not only the levels of SC but also of LC glucosinolates, which are not AOP2 substrates (Figure 3A). In contrast to AOP2, the introduction of AOP3 into Col-0 did not change the total accumulation of SC or LC glucosinolates (Figure 3A). Thus, in the Col-0 background, AOP2 and AOP3 might differ in their ability to regulate glucosinolate accumulation, possibly depending on their difference in enzymatic properties. However, as only one AOP3 line was obtained in Col-0, we cannot completely rule out a regulatory effect of AOP3 on glucosinolate accumulation in this background. In this line, AOP3 expression caused a larger decrease in 3msp than AOP2, and 4msb levels were only significantly decreased in the AOP2 lines (Figures 3B,C). This difference may be critical in case the removal of substrates is important for the regulatory role through a feedback mechanism. However, if the regulatory role of AOP2 depends on its enzymatic activity, not only the removal of substrates but also the accumulation of the specific glucosinolate products may mediate this function. Next, we investigated potential regulatory effects of AOP2 and AOP3 in Cph-0, which accumulates 3msp as the major SC glucosinolate. Cph-0 expressing AOP2 accumulated high levels of SC glucosinolates (Figure 4A). Similar to the effect of AOP2 expression in Col-0, increased SC glucosinolate levels correlated with the amounts of the AOP2 product (Figure 4B). In Cph-0, AOP2 only converted 3msp to 2-propenyl, suggesting that the effect of AOP2 on SC may not depend on the chain length of the available substrate. In contrast to Col-0, introduction of AOP2 to Cph-0 did not significantly affect LC glucosinolate accumulation. Thus, the regulatory effect of AOP2 on LC glucosinolates might differ between the two accessions. Similar to the Col-0 accession, AOP3 did not change the levels of total SC or LC glucosinolates in Cph-0 (Figure 4A). Even though AOP2 and AOP3 can both convert 3msp, AOP2 seems to pull from the 3msp pool more efficiently than AOP3 and possibly thereby increase the flux through the pathway in the Cph-0 accession. Taken together, AOP2 has a larger effect on the regulation of glucosinolate levels than AOP3 in both accessions, which might be explained by the difference in their catalytic activities.

Effects on Flowering Time Differ between AOP2 and AOP3 and Depend on the Genetic Background

The glucosinolate biosynthetic genes AOP2 and AOP3 represent candidate genes for the integration of defense with reproduction, as they are associated with the control of flowering time in both the laboratory and the field (Atwell et al., 2010; Kerwin et al., 2011, 2015). To test the ability of AOP2 and AOP3 to link glucosinolates and flowering time in different backgrounds, we measured flowering time in all of our lines (Figure 5). AOP2 has been identified as a QTL for altering circadian clock parameters and thereby flowering time (Kerwin et al., 2011).
Accordingly, introduction of a functional AOP2 into Col-0 under 16 h light delayed flowering time by several days (Figure 5A). AOP3 has been associated with natural variation in flowering time and in the gene expression level of the MADS-box transcription factor FLC (Flowering Locus C), which is one of the major determinants of flowering (Shindo et al., 2005; Atwell et al., 2010). Analysis of the Col-0 AOP3 line showed no significant difference between Col-0 WT and Col-0 AOP3 lines (Figure 5A). Thus, AOP2 but not AOP3 seems to influence onset of flowering in Col-0.

FIGURE 5 | (A) Flowering time relative to Col-0 WT. Black: Col-0 WT, n = 110; light gray: Col-0 AOP2, n = 50 (2 independent insertion lines); dark gray: Col-0 AOP3, n = 32 (1 line). ANOVA with nesting and experiment interaction (min. 2 repeats) shows that Col-0 AOP2 is significantly different from Col-0 WT and Col-0 AOP3, P < 0.001, whereas P = 0.45 for the Col-0 WT vs. Col-0 AOP3 comparison. (B) Flowering time relative to Cph-0 WT (41.6 ± 5.3 days). Cph-0 WT (black), n = 60; Cph-0 AOP2 (light gray), n = 73 (3 independent insertion lines); Cph-0 AOP3 (dark gray), n = 60 (3 independent insertion lines). ANOVA with nesting of the different insertion lines and experiment interaction (two repeats) showed no significant difference between Cph-0 WT and the insertion lines: Cph-0 WT vs. Cph-0 AOP2 (P = 0.06) and Cph-0 WT vs. Cph-0 AOP3 (P = 0.22). Cph-0 AOP2 and Cph-0 AOP3 showed a significant difference (P < 0.01).

In contrast to its pronounced delay of flowering in Col-0, AOP2 had a suggestive ability to speed up flowering in Cph-0 across three independent transgenic lines (nested ANOVA; P = 0.06; Figure 5B). Similarly, AOP3 expression in the Cph-0 background did not significantly change flowering time compared to the WT (Figure 5B). However, the AOP2 and AOP3 lines in Cph-0 showed a significant difference in flowering time from each other (P < 0.01), indicating that AOP2 and AOP3 have significant opposite effects on flowering time in Cph-0, which differs from the effects in Col-0.

Flowering Time Network Architecture is Critical for the Regulatory Effects of AOP2 and AOP3

While AOP2 and AOP3 had similar effects on glucosinolate accumulation across Col-0 and Cph-0, each gene differs in its ability to alter flowering time in the two accessions. One factor that may contribute to this difference is variation in the internal flowering time pathways of the two accessions. The Col-0 WT flowers earlier than the Cph-0 WT, indicating differences in the flowering networks. To identify the underlying loci that contribute to these differences and to the variation in the AOP2/3 effect on flowering, we established a Col-0 × Cph-0 F2 population to map QTLs that control the major difference in flowering time between Col-0 and Cph-0. We genotyped 171 F2 plants for 100 SNPs using Sequenom MassARRAY, and this resulted in the identification of 53 polymorphic sites. For the Cph-0 accession, with no prior sequence knowledge, we compared the SNP combination to the 1001 genomes accession database (Cao et al., 2011; Schneeberger et al., 2011; Long et al., 2013) and did not find any annotated accession to have the same combination. The F2 population was also phenotyped for flowering time. QTL mapping for flowering time revealed two loci (X29 and X188) as the major QTLs (Figure 6, Table 1). We found FT and FLC as the top candidate genes in these loci (candidate gene list from Grillo et al., 2013).
Therefore, we analyzed the transcript levels of FT and FLC in the two accessions and found FT expression in Cph-0 to be 1-2% of the transcript level in Col-0, while FLC transcript levels were around 500 times higher in Cph-0 than in Col-0 (Table 2). This is in agreement with the observed difference in flowering time between the two accessions, as FLC delays flowering time by repressing FT expression. This difference suggests that the ability to detect the influence of the AOP2 and AOP3 genes on flowering may depend on the allelic status at these known major-effect flowering time genes in Arabidopsis. Further work is needed to assess how the AOP2 or AOP3 genes interplay with these known flowering time genes.

Discussion

In the field, glucosinolate profiles and herbivory resistance are strongly dependent on GS-AOP and GS-ELONG (Bidart-Bouzat and Züst et al., 2012; Brachi et al., 2015; Kerwin et al., 2015). Hence, the structures and levels of the glucosinolates produced depend on the allelic status of these two loci, and their interaction plays an important role for plant fitness. The introduction of AOP2 into the AOP null accessions Col-0 and Cph-0 causes accumulation of alkenyl glucosinolates together with increased levels of aliphatic glucosinolates (Figures 3, 4) (Li and Quiros, 2003; Wentzell et al., 2007; Burow et al., 2015). In contrast, AOP3 expression in the same backgrounds led to formation of hydroxyalkyl glucosinolates without associated changes in glucosinolate levels. Thus, the production of alkenyl but not hydroxyalkyl glucosinolates influences the feedback regulation of glucosinolate biosynthesis in Cph-0 and possibly also in Col-0. Both AOP2 and AOP3 convert the same substrates, which suggests that it is the product being produced and not the substrate that is the determining factor. Consequently, the increased flux from primary metabolism into specialized metabolism in the presence of AOP2 may depend on the products of the enzyme mediating positive feedback regulation of the pathway. Recent work has suggested that this may occur by altering the jasmonate signal transduction pathway in lines that contain a functional AOP2 gene (Burow et al., 2015). AOP2 significantly increased levels of aliphatic glucosinolates in both Col-0 and Cph-0, which differ in their GS-ELONG allelic status. This suggests that variation in 3C or 4C substrate availability is not the major determinant for the control of SC glucosinolates by the interaction of AOP2 and GS-ELONG. Only in the Col-0 background were LC glucosinolate levels also significantly increased in the AOP2 lines, which could suggest that this effect requires the presence of C4 glucosinolates or variation in MAM3, which catalyzes the production of the LC glucosinolates. The fact that LC levels differ between the two WTs suggests that there is variation in MAM3 or in the regulatory network controlling LC glucosinolate accumulation. As previously found, we could show that AOP2 delays flowering time in Col-0 (Kerwin et al., 2011), but there was only a suggestive effect of AOP2 on flowering in the Cph-0 background (Figure 5). Mapping flowering time variation between Col-0 and Cph-0 suggested that these two accessions have natural variation in FLC and FT, suggesting that this difference in the AOP2 effect on flowering time may be due to interactions with the known flowering time pathways.
Supporting this hypothesis, AOP2 has been shown to alter the circadian clock pathway that affects flowering time via regulation of FT (Kerwin et al., 2011). Thus, the regulatory effect of AOP2 on flowering time in Col-0 through FT might be altered by the competing regulation by the 500 times higher FLC levels in Cph-0. Recently, AOP2 was moreover found to mediate positive feedback regulation between the aliphatic glucosinolate biosynthetic pathway and jasmonate signaling (Burow et al., 2015). More specifically, expression of AOP2 in Col-0 led to increased transcript levels of MYC2. Interestingly, MYC2 is not only a key regulator of jasmonate-mediated plant responses (Lorenzo et al., 2004; Dombrecht et al., 2007) including glucosinolate biosynthesis, but is also involved in circadian oscillation of jasmonate signaling through direct interaction with the clock component TIME FOR COFFEE (Shin et al., 2012). The ability of AOP2 to alter flowering time in Col-0 might thus be linked to its regulatory input into jasmonate signaling. In contrast to AOP2, no clear effect was observed for AOP3 in either accession. Yet AOP3 has been linked to natural variation in FLC expression (Atwell et al., 2010), indicating that AOP3 provides input to the flowering time network through a different mechanism than AOP2. If AOP3 can play a regulatory role in flowering time at all, the effect is minor and possibly requires a specific allelic status at the flowering time genetic loci. Our study illustrates natural variation in the cross-talk between glucosinolate accumulation and flowering time. The ability to fine-tune this cross-talk differs between the two enzyme-encoding genes, AOP2 and AOP3, even though they arose from a recent gene duplication event and share high sequence similarity (Kliebenstein et al., 2001c). To fully understand the role of the GS-AOP locus in fine-tuning the regulatory cross-talk between glucosinolate profiles, jasmonate signaling, and the onset of flowering time, future studies in a larger number of genetic backgrounds will be required. Most importantly, studies in accessions expressing a functional AOP2 or AOP3 will reveal potential differences in the architecture of the regulatory networks in these backgrounds. Nevertheless, the specific regulatory effects and the dependency on the genetic background possibly reflect the plant's need to coordinate defense and reproduction when faced with different combinations of biotic and abiotic challenges. Hence, Arabidopsis seems to have evolved different cross-talk mechanisms linking defense and flowering time phenotypes to adapt to different environments.

Author Contributions

LJ, DK, and MB designed the study and interpreted the data. LJ and MB conducted the plant work. LJ, MB, and DK did the statistical analyses. HJ designed and carried out expression analyses. LJ and MB wrote the paper. DK and BH commented on the manuscript.
VR Setup to Assess Peripersonal Space Audio-Tactile 3D Boundaries

Many distinct spaces surround our bodies. Most schematically, the key division is between peripersonal space (PPS), the close space surrounding our body, and extrapersonal space, which is the space out of one's reach. The PPS is considered an action space, which allows us to interact with our environment by touching and grasping. In the current scientific literature, visual representations of PPS appear as mere bubbles of even dimensions wrapped around the body. Although more recent investigations of the PPS of the upper body (trunk, head, and hands) and lower body (legs and feet) have provided new representations, no investigation has yet been made concerning the estimation of PPS's overall representation in 3D. Previous findings have demonstrated how the relationship between tactile processing and the location of sound sources in space is modified along a spatial continuum. These findings suggest that similar methods can be used to localize the boundaries of the subjective individual representation of PPS. Hence, we designed a behavioral paradigm in virtual reality based on audio-tactile interactions, which has enabled us to infer a detailed individual 3D audio-tactile representation of PPS. Considering that inadequate body-related multisensory integration processes can produce incoherent spatio-temporal perception, the development of a virtual reality setup and a method to estimate the representation of the subjective PPS volumetric boundaries will be a valuable addition for the comprehension of the mismatches occurring between the body's physical boundaries and body schema representations in 3D.

INTRODUCTION

In the last two decades, we have witnessed a rising interest in neuroscience regarding cross-modal and multisensory body representations (Maravita et al., 2003; Holmes et al., 2004) and their influences on the mental division of external spaces and 3D spatial interactions (Grüsser, 1983; Previc, 1990; Previc, 1998; Cutting and Vishton, 1995; Maravita et al., 2004; De Vignemont and Iannetti, 2015; Postma et al., 2016). Many definitions evoking body spatial representations exist, and as a result much confusion arises between the concept of body schemas (Head and Holmes, 1911; Bonnier, 2009) and body image representations (Gallagher, 1986). Originally, peripersonal space (PPS) was based dominantly on visual-tactile neurons, first observed in electrophysiological studies in the monkey Macaca fascicularis (Rizzolatti, 1981; Graziano and Gross, 1993). In our study, we chose to focus on egocentric body schema representations, insofar as they represent how the body dictates the movements it performs (De Vignemont, 2010; De Vignemont, 2018) and constitute an unconscious experience of spatiality which relies on the multisensory integration mechanisms closely involved in the dynamic representations of PPS spatial encoding (Spence et al., 2008; Brozzoli et al., 2012), which interest us. PPS, the close space surrounding the body (Brain, 1941; Previc, 1988; Rizzolatti et al., 1997; Noel et al., 2015a; Di Pellegrino and Làdavas, 2015; Graziano, 2017; Hunley et al., 2018), can be traced back, in visual representation, all the way to the drawing of the "Vitruvian Man" by Leonardo da Vinci (1490), which depicted the human body's anatomical configuration and proportions.
Over the centuries, the topic of PPS spatial representation has continually been explored in different domains of the visual arts (sculpting, painting, and drawing), the performing arts (dance, music, theater, fencing, etc.), and scientific domains. For instance, we can trace to sixteenth-century Spain the origin of PPS in the fencing discipline named "destreza", based on the application of geometrical principles which determine an imaginary sphere in order to conceptualize distances and movements between the opponents. More recently, at the turn of the twentieth century, the choreographer Rudolph Laban's choreutics theory (Von Laban, 1966) linked his studies of movement with Pythagorean mathematics and formulated the concept of a kinesphere in order to characterize the space surrounding one's body "within reaching possibilities of the limbs without changing one's place" (Dell et al., 1977). In our view, the kinesphere indicates a deep understanding of the interactive and enactive properties inherent to embodied perception and cognition. Enactive relates to situations in which the simple perception or recollection of a body motor action produces the activation of motor cortical areas (Keysers et al., 2004; Gallese et al., 2009). Interactive relates to when the body acts as an interface for the planning and the execution of motor actions in its environment. One could even guess the origin of PPS's visual rounded shape from the etymology of the word "sur-round-ing", as built upon man's bodily sphere of action. Indeed, relating to our perception, the kinesphere could be considered a forerunner model of PPS, as it exhibits dynamic and plastic features of its spatial boundaries, which can either extend or shrink (Von Laban, 1966), much like PPS. Thus, PPS size can be modified and remapped according to a long inventory of factors, such as a subject's arm length (Longo et al., 2007; Lourenco et al., 2011), subject's handedness (Hobeika et al., 2018), and the choice of the stimulated body parts. Its size, however, can also be modified by tool use (Maravita and Iriki, 2004; Làdavas and Serino, 2008), with tools as diverse as a rake (Farnè and Làdavas, 2000; Bonifazi et al., 2007; Farnè et al., 2007), a stick (Làdavas, 2002; Gamberini et al., 2008), a laser pointer (Gamberini et al., 2008), a computer mouse (Bassolino et al., 2010), a rubber hand (Lloyd, 2007), a dummy hand (Makin et al., 2008), a 3D virtual hand (D'Angelo et al., 2018), an avatar disconnected from the subject's body (Mine and Yokosawa, 2020), a mirror (Holmes et al., 2004; Làdavas et al., 2008), body shadows (Pavani et al., 2004), and a wheelchair (Galli et al., 2015). The integration of external objects within PPS boundaries highlights the flexibility of the brain's multisensory spatial PPS representations (De Vignemont and Iannetti, 2015; Dijkerman, 2017). Furthermore, PPS spatial boundaries can also be modulated without tool use (Berti and Frasinetti, 2000; Serino et al., 2015b).
They can change by performing walk-and-reach movements (Berger et al., 2019); according to certain laws of physics, that is, gravitational cues (Bufacchi et al., 2015); with personality traits, that is, anxiety (Sambo and Iannetti, 2013; Iachini et al., 2015a); with the social perception of other persons' body and facial postures (Ruggiero et al., 2017; Cartaud et al., 2018; Fini et al., 2014; Pellencin et al., 2018); but also with the social perception of others' behaviors (Iachini et al., 2015b) and attitudes (Teneggi et al., 2013). Even certain phobias and psychiatric conditions can modulate PPS shapes and dimensions, that is, claustrophobia (Lourenco et al., 2011), cynophobic fear (Taffou et al., 2014), anorexia nervosa (Nandrino et al., 2017), schizophrenia (Delevoye-Turrell et al., 2011; Noel et al., 2017), autism spectrum disorder (Noel et al., 2017), and various apraxic syndromes, that is, mirror apraxia (Binkofski et al., 2003). All the non-exhaustively enumerated factors listed above give rise to particular shapes of PPS spatial representations. For the investigation of PPS's distinct spatial configurations, recent studies in neuroscience use a vast panoply of means to an end. Some look at the neural basis of PPS by paying particular attention to the activity of the firing rates and receptive fields (RF) of multisensory neurons located in the fronto-parietal network, which includes the premotor cortex (Graziano et al., 1994; Fogassi et al., 1996; Lloyd et al., 2003a; Ehrsson et al., 2004), together with the intraparietal sulcus and the lateral occipital complex (Makin et al., 2007) as well as the posterior parietal cortex (Bremmer et al., 2001). Although the premotor, intraparietal, and parietal associative areas have been found to be the functional regions most specifically involved in PPS multisensory representations (Serino et al., 2011; Clery et al., 2015), other studies have used a more psychophysical computational approach or have designed neuropsychological and psychophysical studies (Canzoneri et al., 2012; Teneggi et al., 2013; Noel et al., 2018) to assess PPS proxy limits and flexibility using experimental setups in real and virtual reality (Iachini et al., 2016; Lee et al., 2016) as well as mixed-reality environments. Even if these setups have been proven to be valid techniques in neurophysiological studies, only the latter uses real subjects in its virtual setup and not only avatars. However, a 3D description of PPS remains unexplored, because the method involves visual stimuli. When reviewing the current psychophysical and behavioral tasks already used to measure and identify the PPS spatial signature, our motivation was to find the best method which could be applied in a VR setup. Until now, behavioral tasks such as the hand-blink reflex to map defensive peripersonal space (Sambo et al., 2012; De Vignemont and Iannetti, 2015; Bufacchi and Iannetti, 2016), line bisection tasks (Halligan and Marshall, 1991; Longo and Lourenco, 2006), cross-modal congruency tasks (Spence et al., 2000; Lloyd et al., 2003b), and visuo-tactile tasks (Brozzoli et al., 2010; Noel et al., 2015b) have been carried out. We chose to adapt an audio-tactile interaction task paradigm (Canzoneri et al., 2012; Teneggi et al., 2013) to a VR setup in order to infer PPS boundaries around the subjects' body in 360°.
The VR setup allowed us, using anechoic conditions, to accurately eliminate noises (i.e., reflection, reverberation, and Doppler shift) from our virtual binaural audio space and enabled the design of specific sound localizations (elevation, azimuth, and distance). In ambient space, it is challenging to record a sound object for every direction (Yairi et al., 2009), and the approach therefore fits our motivation for defining PPS 3D boundaries. Our goal in this study is to provide a phenomenological description of PPS's three-dimensional audio-tactile boundaries in relation to its egocentric body schema-related representation. Audio-tactile stimuli were preferred over visuo-tactile ones (Kandula et al., 2017). We used both flat and dynamic sound stimuli instead of receding and looming sound stimuli, because our aim was to acquire PPS 3D boundaries for dynamic and non-dynamic sound stimuli and not to deduce the boundaries from a comparative approach of reaction times (RT). Therefore, receding sounds were replaced by flat sounds (Ferri et al., 2015; Ardizzi and Ferri, 2018). The PPS obtained by RT thresholds will be referred to as the subjective PPS and set against an objective PPS, defined as the subject's reachable space from a static position based on the subject's arm length.

Participants

Eight healthy participants took part in the study (3 males, average age 27 ± 1 years, and 5 females, 24 ± 2 years). We chose a small sample for our pilot in order to ease its implementation. The participants were recruited from the Weizmann Institute of Science and the Faculty of Agriculture students in Rehovot and were remunerated with 50 ₪ per hour. All the participants gave their written informed consent to participate in the study. The study was performed in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (IRB) of the Weizmann Institute. As self-reported, all the participants had no hearing or touch impairments, no visual impairments requiring glasses or contact lenses, and no known history of neurological or psychiatric disorders.

Material

Our setup (see Figure 1) is based on previous studies (Canzoneri et al., 2012; Noel et al., 2015a; Serino et al., 2015a; Pfeiffer et al., 2018), which have demonstrated how the relationship between tactile processing and the location of sound sources in space, modified along a spatial continuum, can be used to localize the boundaries of peripersonal space representation. Relying on these findings, we designed an audio-tactile interaction task in virtual reality using the Unity 3D game engine (version 2017.4.40), run with the HTC Vive System (HTC Vive Virtual Reality System, 2017). The Vive System was set to track an area of 4 × 4 m and includes the HTC Vive headset, with a refresh rate of 90 Hz, a 110-degree field of view, and a display resolution of 1,080 × 1,200 per eye (2,160 × 1,200 combined pixels). It also includes a pair of hand controllers (dual-stage trigger) and uses a TPCast wireless adapter (wireless signal at 60 GHz with less than 2 ms latency).

Stimuli

The audio stimuli were displayed using the HTC Vive headset's ambisonic features to render a spherical soundscape around the subject's body, and the tactile stimuli were delivered using the Woojer haptic strap (https://www.woojer.com/technology/).
To create, process, and control the directionality, intensity, and frequency of each axis of our audio input signal, we used the recently developed open-source 3D Tune-in Toolkit to render binaural spatialization (Cuevas-Rodriguez et al., 2019) and a custom MATLAB script to automatically create the audio stimuli for each subject based on the subject's head circumference parameters. The files were imported into the Unity 3D game engine in wav format. Furthermore, the toolkit was developed to support people using hearing-aid devices, to gain optimal accuracy in the spatialization of the sound (angle and distance), and to allow the customization of subjects' interaural time difference (ITD). The ITD was simulated separately from the HRIR and calculated with the specific user-inputted head circumference for each of the eight subjects.

Audio Stimuli

The audio stimuli were broadcast from the twelve virtual sound sources around the subject's head and arranged in a random combination of flat (4) and dynamic (4) repetitions of the sound (a 35 Hz pink noise wav with a velocity of 22 cm/s and a duration of 5.5 s). The flat sounds were of constant intensity, while the dynamic sounds were of increasing intensity in order to simulate sounds looming toward the subject's sternum xiphoid process. The intensity level of the virtual audio stimuli surrounding the subject's head was automatically generated by the 3D Tune-in Toolkit, starting from twelve virtual sound source positions located at 1.2 m from the subject's head. The coordinates were selected to represent the surface of a sphere with a radius R = 1.2 m centered on the subject's sternum xiphoid process (see Table 1 for the virtual sounds' spatial positions).

The Tactile Stimuli

We ran a calibration scene in Unity 3D (60 trials, 10 at each intensity level) to calibrate the tactile stimuli based on subjects' perceptive vibration thresholds. A MATLAB script was developed to analyze the subject's perceptive level of vibration intensity (we chose the smallest intensity that scored 10 in the MATLAB summary). Each subject obtained this score with the maximum level of intensity. The signal decomposes into the sum of an attack signal, which is a 200 Hz sinusoidal wave of amplitude +4 dB, and a decay signal of frequency 50 Hz and amplitude −22 dB. The stimulus onset asynchrony (SOA) between the audio and tactile stimuli was based on the individual subject's arm length; the tactile stimuli were calculated to be delivered when the sound was at 6 distances (time points) around the subject's body, D1: 0.7, D2: 0.9, D3: 1.1, D4: 1.3, D5: 1.5, and D6: 1.7 arm lengths from the subject's chest, for a duration of 100 ms (a sketch of this scheduling follows below). The tactile vibrations were delivered at the subject's sternum xiphoid process location using the Woojer strap haptic belt, which was connected to a sound card (USB 2.0 7.1 AUDIO SOUND BOX CM6206) enabling the display of the audio stimuli through the HTC Vive headset (the computer sound setting was set to the maximum).

Procedure

On their arrival, subjects were asked to fill in their personal details (identity number, date of birth, gender, and level of fitness). We then manually took several other body measurements, which were reported onto the subject's personal form as we went along.
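A hedged sketch of the SOA scheduling implied above: for a virtual sound looming from R = 1.2 m toward the sternum at 22 cm/s, the tactile pulse can be timed for the moment the source reaches each target distance D1-D6. That triggering rule and the 0.6 m arm length below are our illustrative assumptions, not parameters stated explicitly by the authors.

```python
R = 1.2            # initial virtual source distance (m)
V = 0.22           # looming velocity (m/s)
MULTIPLES = [0.7, 0.9, 1.1, 1.3, 1.5, 1.7]   # D1..D6, in arm lengths

def soa_schedule(arm_length_m: float) -> list:
    """Seconds after sound onset at which the tactile stimulus fires for D1..D6."""
    return [(R - m * arm_length_m) / V for m in MULTIPLES]

# For a hypothetical 0.6 m arm, D6 = 1.02 m gives an SOA of ~0.82 s and
# D1 = 0.42 m gives ~3.55 s; all six fall within the 5.5 s sound duration.
print(soa_schedule(0.6))
```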
Procedure
On their arrival, subjects were asked to fill in their personal details (identity number, date of birth, gender, and level of fitness). We then manually took several additional body measurements, which were recorded on the subject's personal form as we went along. Subjects' measurements were taken in the following set order: weight and height, head circumference (for the parametrization of the 3D Tune-in Toolkit), arm length (from the acromioclavicular joint to the middle fingertip), interpupillary distance (IPD) (for the calibration of the headset), and pupillary height from the floor and root of the nose to the sternum xiphoid process location (which would later serve to position the haptic belt onto the subject's upper body). Before the start of the experiment, we ran a Unity 3D calibration scene to assess the perceptive level of vibration intensity of each subject. To do so, the subject was equipped with the Woojer haptic belt (placed on the lower part of his/her sternum according to the previously taken measurement) and, while he/she stood still, we delivered different intensities of vibration through the haptic belt. The subject's task was to press the HTC Vive hand-controller trigger every time he/she felt a vibration. The data were saved in a MATLAB folder and analyzed using a custom MATLAB script, which later served to configure the intensity parameters of the tactile stimuli in the Unity 3D game engine based on each subject's perceptive vibration intensity level. The experiment was performed in a windowless room with the air-conditioning set at a temperature of 22 °C. Before the start of the experiment, the investigator explained to the subject the task instructions, workflow, and duration of the experiment. Subjects were instructed to stand still at the same marked position in the center of the room, and motor actions were limited to responding manually, as fast as possible, with one of the HTC Vive hand-controller triggers (the right one for right-handed and the left one for left-handed subjects) to a tactile stimulus administered on the lower sternum by the haptic belt at different delays from the onset of task-irrelevant dynamic sounds, which gave the impression of a sound source looming toward the subject. Results were derived from the subject's reaction time (RT), which was taken as a proxy for the subject's PPS. The workflow of the experiment included a preliminary training session to familiarize the subject with the equipment and the task, followed by the experiment itself, which consisted of four separate blocks of 24 trials per block (144 trials in total; the total duration was 1 h 30 min at the most, with a 15 min break after the first two blocks). The subject was equipped with the HTC Vive headset, which was adjusted according to his/her interpupillary distance, was given the HTC Vive hand controllers, and had the haptic belt strap attached at the lower sternum location (based on our previous measures). The subject had to stand still in the middle of a square room of 4 × 4 m with his/her feet parallel and 40 cm apart, as specified by two masking-tape lines on the floor, and was told to keep this position during the entire experiment. Subsequently, the investigator customized the block parameters of the Unity 3D experiment using the subject's previously acquired measurements (subject identity number, arm length, nose-to-sternum distance, eye height, and vibration calibration). The virtual scene consisted of an infinite black background in which the subjects had to fixate a red cross target located in the middle of the virtual scene at a distance of 3 m, visible for 0.5 s.
When the subject's head was aligned correctly (less than 9.5 degrees of deviation from the target center), the red cross disappeared and only the black background remained. Thereafter, the experiment was initiated with the training session (10 trials), followed by the four experimental blocks, run as two sessions of two blocks each. The raw data of the experiment were saved in a MATLAB folder for analysis.

Analysis
We ran a preliminary analysis, using a MATLAB script, of the 60 calibration trials for the tactile stimuli to ensure that the stimuli delivered were felt by all subjects. For each of the twelve directions of the virtual flat and dynamic sounds, the reaction times at the different distances were approximated with a sigmoidal function:

y(x) = ymin + (ymax − ymin) / (1 + e^((xc − x)/b))

Here, y(x) represents the RT as a function of the distance x of the tactile stimulus. ymin and ymax are the lower and upper saturation values of the sigmoid. The central point of the sigmoid is xc, where y(xc) = (ymin + ymax)/2; it is also the point where the slope is maximal (the slope being determined by 1/b). The abscissa xc represents the RT threshold and thus the distance between the sternum of the subject and the boundary of PPS in that direction. For each participant, we produced two 3D representations: one relying on the RTs for dynamic sound stimuli and one on the RTs for flat sound stimuli. We describe the obtained shapes and focus specifically on their possible anisotropy in both cases.

RESULTS
The results for each participant are encompassed within the 3D representations we obtained for both flat and dynamic stimuli. By connecting the RT thresholds in each of the twelve directions, we drew a spatial polyhedron which serves as an approximation of PPS and its boundaries in 3D. This polyhedron does not display symmetric properties. Furthermore, the anisotropy of the polyhedra for the eight subjects does not obey any systematic rule. This might reflect the small sample of our pilot. To our knowledge, the rendering of the phenomenological components of PPS in 3D has never been visually highlighted.

DISCUSSION
Virtual audio stimuli can be modified according to many experimental parameters: type (flat or dynamic, pink noise or white noise), velocity, distance, and direction. One of the major problems in the use of an audio setup to estimate the borders of PPS is sound reverberation. The reverberation energy ratio depends on the shape, size, and physical material of the room. By using the 3D Tune-in Toolkit, where the signal is anechoic, these effects were nullified. Comparing the subjective boundaries of PPS for flat versus dynamic audio stimuli across 52 subject/trajectory pairs, the closer RT threshold was obtained with flat stimuli in 26 cases and with dynamic stimuli in 26 cases (see Table 2). We did not observe, as several authors have argued (Serino et al., 2015b), that dynamic incoming sounds affect audio-tactile interactions predominantly compared with flat ones. In the literature, it appears that PPS responses rely not only on stimulus proximity but also on velocity parameters. The multisensory neural adaptation mechanism involved in PPS responses may not work for stimuli above 100 cm/s because of a lack of rapid recalibration (Noel et al., 2020). The audio stimulus velocity of our setup was 22 cm/s and therefore cannot explain the inconclusive results between flat and dynamic audio stimuli. Boundaries of subjective PPS are obtained as thresholds in sigmoidal approximations of the points corresponding to tactile stimulus distances (see Figures 2, 3).
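To make this threshold extraction concrete, here is a minimal MATLAB sketch of the sigmoidal fit for one sound direction; the RT values are hypothetical, and base-MATLAB fminsearch stands in for whatever fitting routine the custom script actually used:

% Minimal sketch (hypothetical data): fit the sigmoid
% y(x) = ymin + (ymax - ymin) ./ (1 + exp((xc - x)./b))
% to RTs measured at the six tactile distances and read off the threshold xc.
x = [0.7 0.9 1.1 1.3 1.5 1.7];              % distances D1..D6 (arm-length units)
y = [331 338 355 392 410 415];              % hypothetical mean RTs (ms)
sigm = @(p, x) p(1) + (p(2) - p(1)) ./ (1 + exp((p(3) - x) ./ p(4)));
sse  = @(p) sum((y - sigm(p, x)).^2);       % least-squares objective
pHat = fminsearch(sse, [min(y) max(y) median(x) 0.2]);
xc   = pHat(3);                             % RT threshold = subjective PPS boundary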
These distances represent between 25% and 129% of the arm length of the subject, and the threshold should lie between these two extreme values (see Table 3). In some cases, the threshold is beyond or below these values (see Figure 2). The interpretation is that the sigmoidal approximation does not yield a meaningful threshold for these data. This echoes the methodology of some studies, which remove from their results those participants with poor sigmoidal fits (Holmes et al., 2020). Furthermore, since we used stimuli belonging to the far category (130 cm), this could also be an effect of the variability of multisensory responses between close and far audio-tactile stimuli, which Serino has shown to be higher for far stimuli than for close ones (Serino, 2016). Individual variability in PPS might also be explained by various cultural and ethnic factors; indeed, our subjects were recruited among Weizmann international students, and the impact of ethnic background on peripersonal size and shape has been recently acknowledged (Yu et al., 2020). The proximity of the stimulus has been widely explored, in contrast to the direction of stimulus movement (Bufacchi and Iannetti, 2018). Thus, we introduced a setup that tests virtual audio stimuli in various directions. We observed that the representations computed from these results did not feature systematic anisotropy of the PPS 3D boundary in one direction relative to the others. Sometimes, anisotropic properties can be the result of gravitational forces (Bufacchi et al., 2018). By implementing our setup in a more systematic experiment than our pilot, it should be possible to confirm or refute the isotropy of subjective PPS. In further studies, this question could also be refined by testing other sound directions, by reorienting the virtual sound sources, and by shifting the location of the tactile stimuli. Previous studies have already examined peripersonal space boundaries around the trunk, face, feet (Stone et al., 2018), and the soles of the feet (Amemiya et al., 2019). In this experiment, the virtual audio stimuli were oriented toward the sternum. In prospective studies, we could test reaction times taking into consideration the orientation of the virtual audio stimuli toward other body centers. We could then expect various anisotropic properties of PPS boundaries to be associated with other body centers.

CONCLUSION
The originality of this phenomenological and behavioural approach was to provide representations of the audio-tactile boundaries of PPS in 3D for each participant using a VR setup. 3D peripersonal space had been investigated around the hand, face, and trunk, but to our knowledge comprehensive 3D spatial representations of PPS around the subject's whole body had not yet been rendered. Although we could not establish a clear distinction between the RT responses to flat and dynamic stimuli, we have to keep in mind that this setup is an experimental pilot run on a limited number of participants. Nevertheless, the benefit of this setup is its high flexibility, which could allow the experimental conditions to be extended in the future. For instance, not only could the audio stimuli parametrization (velocity and type of sound) be modified, but also the localization of the audio stimuli in 3D, in any direction and toward any body center, which could motivate an in-depth study.
Prospectively, our apparatus could serve the purpose of setting up comfort distances in a social VR platform (e.g., AltspaceVR, High Fidelity, and NEOS Metaverse Engine). Indeed, avatar embodiment can heighten the feeling of space violation. This would enable tailor-made configuration of interpersonal spaces, which in effect would facilitate nonverbal social interactions through body gestures and the spatial positioning of avatars in a social VR platform. This setup could be imported within social VR apps to customize the personal space of the gamer's avatar, which would improve the user experience (UX) and prevent virtual harassment issues. Additionally, at present, the literature on the limits of PPS in mental pathologies (e.g., anorexia nervosa, autism spectrum disorders, and schizophrenia) has evaluated them mainly frontally. Therefore, a finer 3D topographical definition of PPS could provide a more precise understanding of the distorted body schemas involved in the production of these altered PPS spatial representations.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board (IRB) of the Weizmann Institute. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
FL created the experimental design of the setup and conducted the experiments. AB wrote the MATLAB script for the experiment, and OK processed the data. FL and GT analyzed and discussed the results. TF supervised the research.

FUNDING
All the authors are supported by the Feinberg Institute of the Weizmann Institute of Science, Rehovot, Israel. The first author has additional funding from the Braginsky Center of Art and Science of WIS. The second author is supported by the Israel Science Foundation (grant No. 1167/17) and the European Research Council (ERC) under the European Union Horizon 2020 research and innovation program (grant agreement No. 802107).
Long-Lasting Protective Immune Response to the 19-Kilodalton Carboxy-Terminal Fragment of Plasmodium yoelii Merozoite Surface Protein 1 in Mice

ABSTRACT
Merozoite surface protein 1 (MSP1) is the major protein on the surface of the plasmodial merozoite, and its carboxy terminus, the 19-kDa fragment (MSP1-19), is highly conserved and effective in the induction of a protective immune response against malaria parasite infection in mice and monkeys. However, the duration of the immune response has not been elucidated. As such, we immunized BALB/c mice with a standard four-dose injection of recombinant Plasmodium yoelii MSP1-19 formulated with Montanide ISA51 and CpG oligodeoxynucleotide (ODN) and monitored the MSP1-19-specific antibody levels for up to 12 months. The antibody titers persisted constantly over this period without significant waning, in contrast to the antibody levels induced by immunization with Freund's adjuvant, where the antibody levels gradually declined to significantly lower levels 12 months after immunization. Investigation of immunoglobulin G (IgG) subclass longevity revealed that only the IgG1 antibody level (Th2 type-driven response) decreased significantly by 6 months, while the IgG2a antibody level (Th1 type-driven response) did not change over the 12 months after immunization; a boosting effect was seen, however, in the IgG1 but not the IgG2a antibody responses. After challenge infection, all immunized mice survived with negligible patent parasitemia. These findings suggest that protective immune responses to MSP1-19 following immunization using oil-based Montanide ISA51 and CpG ODN as an adjuvant are very long-lasting and encourage clinical trials for malaria vaccine development.

Malaria is a major infectious disease that results in severe morbidity and mortality. Recently, it has been estimated that 2.2 billion people worldwide are exposed to Plasmodium falciparum and that 515 (range, 300 to 660) million individuals had clinical episodes of malaria in 2002 (24). Many factors are involved in this burden of malaria, such as the appearance of drug-resistant strains of Plasmodium, both P. falciparum and P. vivax, insecticide-resistant Anopheles mosquito vectors, and the lack of an effective malaria vaccine (8). Many malaria vaccine candidates have been developed, and some of them are being tested in ongoing clinical trials (6, 7). Merozoite surface protein 1 (MSP1) is a leading malaria vaccine candidate. It is produced during schizogony and merozoite maturation. MSP1 is composed of many fragments, and only its small 19-kDa fragment at the carboxyl terminus (MSP1-19) is carried into newly invaded erythrocytes (2). MSP1-19 is highly conserved and is composed of two epidermal growth factor-like domains which contain protective epitopes (5, 17). In previous studies, it was shown that immunization with recombinant MSP1-19 of Plasmodium falciparum or P. yoelii protects monkeys or mice, respectively, against infection (5, 10, 15, 17). Our studies have also shown that protection is correlated with high levels of MSP1-19-specific antibodies at the time prior to challenge infection, but not with effector T cells or other accessory factors associated with cell-mediated immunity (10, 11). Passive transfer of MSP1-19-immune serum has demonstrated that while an active immune response postinfection is necessary for protection against lethal malaria (12), its specificity for MSP1-19 is not required for protection (29).
CpG oligodeoxynucleotides (ODNs) have an extensive ability to activate the innate and adaptive immune responses (14) via binding to Toll-like receptor 9 (9). Activation of dendritic cells by CpG ODN induces cell maturation and production of proinflammatory cytokines, such as interleukin 1 (IL-1), IL-6, tumor necrosis factor alpha, and type I interferon, as well as the Th1-promoting cytokine IL-12 (1, 25). CpG ODNs have been found to be useful as adjuvants for peptide/protein vaccines against various pathogens, including malaria parasite antigens (13, 16, 19, 26). The results of our previous studies have demonstrated that CpG ODN in combination with Montanide ISA51 or ISA720 strongly promotes MSP1-19 in the induction of a specific antibody response and protection against a lethal malaria infection in mice (13). However, the longevity of the antibody response to MSP1-19 has not been studied. In this study we investigate how long the MSP1-19-specific antibody response lasts following immunization with recombinant P. yoelii MSP1-19 formulated with CpG ODN in Montanide ISA51, the kinetics of the antibody isotype responses, and the degree of protection.

Mice and parasites. Female BALB/c mice, 6 to 8 weeks of age at the start of the experiments, were purchased from the National Laboratory Animal Centre, Mahidol University, Salaya, Nakhon Prathom, Thailand. Plasmodium yoelii YM, a lethal murine malaria parasite, was maintained in our laboratory and used for challenge infection.

Recombinant MSP1-19. Recombinant MSP1-19 was produced according to the instructions of the manufacturer (Eastman Kodak, Scientific Imaging Systems). The recombinant protein was purified using an anti-FLAG M1 antibody gel column (Sigma), and its purity was demonstrated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis by formation of a single band (13).

Adjuvants. CpG ODN 1826 (TCCATGACGTTCCTGACGTT, containing two CpG motifs) used in this study was kindly provided by A. M. Krieg, Coley Pharmaceutical Group. Montanide ISA51 was a kind gift from SEPPIC, France. Complete Freund's adjuvant (CFA) and incomplete Freund's adjuvant (IFA) were obtained from Sigma.

Immunization protocol. Mice were immunized subcutaneously with an emulsion of one part phosphate-buffered saline (PBS) or 20 μg of recombinant MSP1-19 plus 50 μg CpG ODN 1826 and one part Montanide ISA51, or of one part PBS or the antigen and one part CFA. On days 21, 42, and 56, mice were boosted with the same amount of antigen plus CpG ODN in Montanide ISA51 or with the antigen plus IFA, via subcutaneous, intraperitoneal (i.p.), and i.p. injections, respectively (10, 13).

Antibody assay. Sera were collected 2 weeks after the last immunization and then every month for the assessment of MSP1-19-specific immunoglobulin G (IgG) antibody and antibody subclasses by enzyme-linked immunosorbent assay (ELISA) as described previously (13). Briefly, MaxiSorb immunoplates (Nunc, Denmark) were coated with 100 μl of 0.5 μg/ml MSP1-19 in coating buffer overnight at 4 °C. After three washes with 0.05% Tween 20 in PBS, wells were blocked by the addition of 200 μl of PBS containing 1% bovine serum albumin at 37 °C for 1 h. Supernatants were discarded, and 100-μl amounts of twofold serial dilutions of serum were added to the wells. After incubation for 1 h, the wells were washed, and then 100 μl of horseradish peroxidase-conjugated goat anti-mouse IgG (Zymed Laboratories, Inc.) diluted 1/3,000 was added to each well. For antibody subclass determination, after incubation with sera and washing of the wells, 100 μl of horseradish peroxidase-conjugated anti-mouse IgG1 or IgG2a (Zymed Laboratories, Inc.) diluted 1/1,000 was added to each well. After incubation for 1 h, wells were washed and 100 μl of o-phenylenediamine dihydrochloride (OPD; Sigma) substrate solution was added to each well. The plate was incubated at room temperature for 30 min, and then 100 μl of 1 N H2SO4 was added to stop the reaction. The plate was read for optical density (OD) at 490 nm using an ELISA reader. The antibody titers were judged as the highest dilution of serum for which the OD was equal to or greater than the mean OD of healthy control sera.
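As an illustration of this endpoint-titer rule, a minimal MATLAB sketch with made-up OD readings and an assumed control cutoff (not the study's data) is shown here:

% Minimal sketch (hypothetical data): the endpoint titer is the highest serum
% dilution whose OD490 is still >= the mean OD of healthy control sera.
dil    = 1000 * 2.^(0:11);                   % reciprocal two-fold dilutions from 1/1,000
od     = [1.92 1.85 1.60 1.31 0.95 0.62 0.41 0.26 0.16 0.11 0.08 0.06];
cutoff = 0.12;                               % mean OD of healthy control sera (assumed)
titer  = max(dil(od >= cutoff));             % highest dilution at or above the cutoff
fprintf('endpoint titer: 1/%d (log titer %.3f)\n', titer, log10(titer));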
Challenge infection. Mice were challenged intravenously with 1 × 10^4 live P. yoelii YM-parasitized red blood cells (YM-pRBC). Parasitemia was monitored daily by microscopic examination of Dip-Quick-stained blood films, counting at least 10,000 RBC before declaring a slide to be negative.

Statistical analysis. The significance of differences between values was determined by Student's t test in SigmaPlot for Windows version 9.0 (SPSS).

Immunization with recombinant MSP1-19 plus CpG ODN in Montanide ISA51 induces strong antibody responses. We have shown previously that immunization with MSP1-19 formulated with CpG ODN 1826 in Montanide ISA51 (CpG ODN/ISA) induces a very high antibody response and confers complete protection against P. yoelii YM infection (13). In the current study we used this immunization regimen to investigate the duration of the protective immune response. For comparison, we used CFA or IFA as an adjuvant. Mice were immunized with four injections of MSP1-19 mixed with CpG ODN/ISA at days 1, 21, 42, and 56. Fourteen days after the last immunization, sera were collected and assayed for MSP1-19-specific IgG antibody by ELISA. Results showed that IgG antibody responses in mice immunized with MSP1-19 and CpG ODN/ISA were higher than those in mice immunized with MSP1-19 in CFA/IFA, as demonstrated by the OD of sera at a dilution of 1/1,000,000 (mean OD ± standard error [SE], 1.260 ± 0.169 versus 0.351 ± 0.044; P < 0.001) (Fig. 1A) and by antibody titers (geometric mean ± SE, 7.204 ± 0.199 versus 6.401 ± 0.072; P < 0.01) (Fig. 1B). Furthermore, the IgG1 and IgG2a antibody subclass responses were higher following immunization with MSP1-19 plus CpG ODN/ISA compared with the use of CFA/IFA as an adjuvant (geometric mean ± SE of IgG1 antibody titers, 7.330 ± 0.283 versus 6.693 ± 0.177; P < 0.01) (geometric mean ± SE of IgG2a antibody titers, 6.305 ± 0.748 versus 4.407 ± 0.427; P < 0.01) (Fig. 1C). To compare the potency of CpG ODN/ISA and CFA/IFA in initiating IgG1 and IgG2a antibody production, we converted the log titers to the original antibody titers, i.e., a log titer of 7.330 was converted to 21,374,698 and 6.693 to 4,926,517 for IgG1 antibodies, and 6.305 to 2,016,043 and 4.407 to 25,541 for IgG2a antibodies, and then determined the ratio for each subclass. To examine how long the MSP1-19-specific antibody lasts, after the last immunization sera were collected and antibody levels measured every month for 12 months. We found that the MSP1-19-specific antibody titers in mice immunized with MSP1-19 plus CpG ODN/ISA persisted without any significant change over 12 months (geometric mean ± SE, 6.687 ± 0.261 versus 6.904 ± 0.147 at 1 and 12 months, respectively, after the last immunization) (Fig. 2).
In contrast, the antibody titers in mice immunized with MSP1-19 in CFA/IFA gradually decreased over the 12 months after the last immunization, and compared with the antibody levels in the first month after immunization, the antibody levels at 12 months were significantly lower (geometric mean ± SE, 6.465 ± 0.118 versus 5.772 ± 0.100 at 1 and 12 months, respectively; P < 0.0001) (Fig. 2). We also investigated the duration of the MSP1-19-specific IgG1 and IgG2a antibody subclass responses for 12 months after immunization. Interestingly, the IgG1 antibody responses in both groups of mice, immunized with MSP1-19 plus CpG ODN/ISA and with MSP1-19 in CFA/IFA, decreased continuously, and the levels at 6 and 12 months were significantly lower than the levels at 1 month after immunization (Fig. 3A). In contrast, the IgG2a antibody responses did not change significantly over the 12 months (Fig. 3B). However, the titers of antibody following immunization with CpG ODN/ISA were always higher than those after immunization with CFA/IFA.

Protection by the MSP1-19-specific immune response against P. yoelii challenge infection. Having shown above that immunization with MSP1-19 plus CpG ODN/ISA or CFA/IFA induced and maintained the vaccine-specific antibody for at least 12 months, we then examined whether the immunity was protective by the end of this period. Mice were challenged with 1 × 10^4 P. yoelii YM-pRBC intravenously, and parasitemia was monitored. It was found that all mice immunized with MSP1-19 plus CpG ODN/ISA survived infection with no detectable parasitemia (Fig. 4D). Mice immunized with MSP1-19 plus CFA/IFA were less protected; one mouse was completely protected, two mice experienced patent parasitemia (<1%) before they recovered from the infection, and one mouse showed infection delayed by 7 days and died with high parasitemia by day 18 postinfection (Fig. 4C). All control mice immunized with PBS plus CpG ODN/ISA or CFA/IFA died within 8 days with high parasitemia, except one mouse immunized with PBS in CFA/IFA that recovered after being infected in the same manner as the other control mice (Fig. 4A and B).

DISCUSSION
Vaccines need to be highly efficacious in protection against a pathogen and to induce a long-lasting protective immune response. Our previous studies have shown that immunization of mice with MSP1-19 formulated with Freund's adjuvants induces high titers of antibody, which completely protect against a lethal malaria infection (10). Recently, we demonstrated that the formulation of MSP1-19 with CpG ODN 1826 and Montanide ISA51 or ISA720 induces a more effective antibody response and protection against lethal malaria infection compared with the formulation of MSP1-19 with Montanide ISA51 or ISA720 alone, or with Freund's adjuvant, suggesting that CpG ODN plays a role in immunological enhancement (13). In this study, we then examined the longevity of the MSP1-19-specific immunity induced by immunization with MSP1-19 formulated with CpG ODN 1826 in Montanide ISA51. We also used the vaccine formulation with CFA/IFA for comparison. The results showed that total IgG antibody specific for MSP1-19 and protection lasted over 12 months after immunization and that the Th1-dependent IgG2a antibody response persisted stably, while the Th2-dependent IgG1 antibody response declined over this period, in either vaccine formulation. Freund's adjuvant is an oil-based immunostimulatory agent, effectively used in animal immunization to initiate a protective immune response.
The use of this adjuvant in a formulation with MSP1-19 induces a high MSP1-19-specific antibody response and confers complete protection against P. yoelii YM infection (10). The use of Freund's adjuvant is not allowed for human vaccination because of its toxicity. Montanide ISA51 is an oil-based adjuvant of higher purity than Freund's adjuvant and has been demonstrated to be safe for use in humans (20). In a mouse system, MSP1-19 formulated with Montanide ISA51 induces protection against P. yoelii infection even though some mice experienced some patent parasitemia, but in the presence of CpG ODN 1826, complete protection is obtained. This enhancement of protection is correlated with increases in the IgG1 and IgG2a subclass responses (13). The increases of both antibody subclasses are also greater than those induced by using Freund's adjuvant, which we have again confirmed in this study (Fig. 1). Moreover, the increase in IgG2a antibody titers was much greater than that in IgG1 antibody levels, leading to the suggestion that the formulation with CpG ODN in Montanide ISA51 preferentially stimulates the Th1-dependent antibody response. In this study, the MSP1-19-specific IgG antibody titers of four mice at 12 months following the last immunization with MSP1-19 plus CFA/IFA were 421,599, 700,489, 621,767, and 664,660, and after challenge infection, the mice had prepatent periods of 11, 8, 10, and 6 days with peak parasitemia of 0.1, 1.17, 0.44, and >47% (death), suggesting that the prechallenge antibody titers are critical for the protection outcome. Those antibody titers had decreased about fivefold compared with the antibody titers at the first month postimmunization. These results are consistent with the results of our previous studies, which demonstrated that mice immunized with MSP1-19 plus CFA/IFA with prechallenge MSP1-19-specific antibody titers of >6,400,000 were completely protected from challenge infection (10), and that, as demonstrated by intranasal immunization, antibody titers of <640,000 could not confer protection against infection and mice died with high parasitemia, whereas mice that had antibody titers from 640,000 to 2,256,000 experienced little patent parasitemia (<1%) and survived infection (11). In both the CpG ODN/ISA51 and CFA/IFA groups, MSP1-19-specific IgG1 subclass levels continuously decreased, whereas the IgG2a subclass stably persisted over the 12 months postimmunization (Fig. 3). This raised the question of why the decrease in IgG1 antibody did not affect the total IgG antibody level, particularly in the CpG ODN/ISA51 group (Fig. 2). This may be explained by the following reasons. First, both the IgG1 and IgG2a antibody titers produced by the CpG ODN/ISA51 group were much higher than the titers produced by the CFA/IFA group; IgG1 titers were 21,374,698 and 4,926,517, and IgG2a titers were 2,016,043 and 25,541, respectively, giving IgG1/IgG2a ratios of about 11 and 193, respectively. The IgG2a antibody titer for the CpG ODN/ISA51 group was about 80-fold higher than that for the CFA/IFA group. Second, the other IgG subclasses, such as IgG2b, may also be produced prominently, as has been demonstrated by Kumar et al. (16). Therefore, the loss of some IgG1 antibody, with the persistence of high levels of IgG2a and IgG2b antibodies (Th1-type response), may not significantly affect the total IgG antibody titer for the CpG ODN/ISA51 group.
Third, we think that analysis of antibody level by measurement of OD at one dilution of serum may be more sensitive than the assessment of antibody titer. Immunization with four doses of MSP1-19 with either CFA/IFA or CpG ODN/ISA as an adjuvant has been shown to give complete protection against P. yoelii infection (10, 13). In this paper, we have further shown that those immunizations can give rise to long-term protection, which persists for at least 12 months, suggesting that anamnestic immunization is critical for long-lasting immune responses. Fewer doses of immunization, which yielded lower antibody levels and less protection (10), would be less effective in maintaining the antibody response. Our recent unpublished data showed that a single-dose immunization (one injection) with MSP1-19 in CpG ODN/ISA or CFA was able to yield an antibody level comparable to the protective antibody level induced by four-dose immunization, but the level waned significantly within 22 weeks. The long-term MSP1-19-specific antibody response may reflect the generation and maintenance of long-lived plasma cells, which survive for years (18, 22). However, it has been recently demonstrated that mouse memory B cells and long-lived plasma cells specific for P. yoelii MSP1-19 were deleted by apoptosis during malaria infection (30). Our data provide new insight into the role of the MSP1-19-specific antibody response by showing that MSP1-19-specific IgG2a antibody persists constantly longer than IgG1 antibody (Fig. 3A and B) and suggest that the memory of the MSP1-19-specific IgG2a antibody response may last longer than the IgG1 memory. Despite the success of immunization in mice, some researchers have raised the arguments that multiple injections may be difficult to use in humans, particularly in young children, who constitute many of the targets; that the i.p. route is not suitable for human injection and an alternative route is needed; and that other CpG ODN(s) most potent as adjuvants in humans should be sought to achieve protective immunity. Therefore, research on minimizing doses (to one or two injections) and on natural boosting by malaria infection of vaccinated individuals in areas where malaria is endemic would aid malaria vaccine development. There is evidence that MSP1-19-specific IgG2a antibody plays an important role in immunity against malaria infection (23). First, our previous study demonstrated that immunization of mice with MSP1-19 plus CpG ODN and Montanide ISA720 increased MSP1-19-specific IgG2a antibody titers 15-fold but did not alter the MSP1-19-specific IgG1 antibody titers compared with the values for immunization with MSP1-19 in Montanide ISA720 alone. After infection with P. yoelii YM, mice immunized with MSP1-19 plus CpG ODN and Montanide ISA720 were completely protected, with no parasitemia detectable over 24 days of observation, while mice immunized with MSP1-19 in Montanide ISA720 alone succumbed to infection and all died with high parasitemia (13). Second, transfer of immune sera depleted of IgG2a could not confer protection on recipient mice following Plasmodium chabaudi infection, but immune serum depleted of IgG1 could (26). However, the mechanism by which MSP1-19-specific IgG2a antibody mediates protection is not known with certainty.
The IgG2a subclass preferentially binds FcγRI and can thereby mediate antibody-dependent cell-mediated cytotoxicity, antibody-dependent cellular inhibition, and phagocytosis, but the MSP1-19-specific IgG2a antibody has been reported not to use Fc function for antibody-mediated protection (21, 27). It may function by blocking parasite invasion or by inhibiting MSP1 processing, which is required for erythrocyte entry (3, 28). In summary, we have demonstrated, first, that immunization with MSP1-19 formulated with CpG ODN and Montanide ISA51 induces long-term antibody responses and confers complete protection against blood-stage malaria parasite infection and, second, that the MSP1-19-specific IgG2a antibody persists more stably than the IgG1 antibody. Recently, Montanide ISA51 and CpG 7909, a B-class CpG ODN, used separately in human vaccine trials, were found to be well tolerated and to enhance vaccine immunogenicity (4, 20). The combination of Montanide ISA51 and CpG ODN adjuvants should be tested in a clinical trial.
Structural basis of trehalose recognition by the mycobacterial LpqY-SugABC transporter

The Mycobacterium tuberculosis (Mtb) LpqY-SugABC ATP-binding cassette transporter is a recycling system that imports trehalose released during remodeling of the Mtb cell envelope. As this process is essential for the virulence of the Mtb pathogen, it may represent an important target for tuberculosis drug and diagnostic development, but the transporter specificity and molecular determinants of substrate recognition are unknown. To address this, we have determined the structural and biochemical basis of how mycobacteria transport trehalose using a combination of crystallography, saturation transfer difference NMR, molecular dynamics, site-directed mutagenesis, biochemical/biophysical assays, and the synthesis of trehalose analogs. This analysis pinpoints key residues of the LpqY substrate binding lipoprotein that dictate substrate-specific recognition and has revealed which disaccharide modifications are tolerated. These findings provide critical insights into how the essential Mtb LpqY-SugABC transporter reuses trehalose and modified analogs and specify a framework that can be exploited for the design of new antitubercular agents and/or diagnostic tools.

Tuberculosis (TB), caused by the bacterial pathogen Mycobacterium tuberculosis (Mtb), is now the leading cause of death from a single infectious agent worldwide, claiming over 1.5 million lives each year (https://www.who.int/teams/global-tuberculosis-programme/tb-reports). Mtb is a highly successful intracellular pathogen, which has co-evolved over thousands of years to enable it to adapt within the human host and develop highly effective strategies to persist and survive (1). To thrive within this nutrient-restricted host environment, Mtb must access scarce energy sources; however, the precise nutritional requirements of Mtb and the mechanisms of assimilation are poorly understood (2, 3). Unraveling the processes and transporters in Mtb involved in nutrient scavenging and the import of these critical energy sources should lead to new intervention strategies to combat this major global pathogen. For many pathogens, carbohydrates are critical carbon sources for the production of energy and essential biomolecules, which are required for a wide range of cellular processes.
However, the diversity and availability of sugars to Mtb during infection remain largely unclear (2, 3). Trehalose (α-D-glucopyranosyl-α-D-glucopyranoside, α,α-trehalose) is an unusual nonmammalian disaccharide that is highly abundant in mycobacteria (4). Trehalose-containing glycolipids are major components of the mycobacterial cell envelope that contribute to the virulence of the Mtb pathogen and provide an extracellular source of "free" trehalose which can be used as a carbon and energy source (5-7). Trehalose is released either through the hydrolysis of trehalose-containing glycolipids by serine esterases or during the assembly of the mycobacterial cell envelope mediated by the antigen 85 complex (8-10). Recent studies in mutant strains of Mtb have demonstrated that the LpqY-SugABC (Rv1235-Rv1238) ATP-binding cassette (ABC) transporter recognizes trehalose and enables the recovery and recycling of this liberated cell wall disaccharide that would otherwise be lost (6). Mutants of Mtb that lack functional components of the LpqY-SugABC importer are attenuated in mouse infection models, demonstrating the critical importance of trehalose uptake for Mtb virulence (6). Given that trehalose import is fundamental for virulence and essential for Mtb to survive, the Mtb trehalose transporter is an attractive target for inhibitor design. Despite the importance of trehalose uptake in mycobacteria, the molecular details that govern how this disaccharide is recognized, and whether alternative sugars are substrates for this recycling system, remain unresolved. Some understanding of the substrate preference of this mycobacterial ABC transporter can be obtained from studies which found that modified trehalose analogs retaining the α1-1 glycosidic linkage are actively imported by the LpqY-SugABC recycling system and metabolically incorporated into the trehalose mycolates located within the cell envelope (11-13). Whether the mycobacterial LpqY-SugABC transporter is able to facilitate the import of alternative, more diverse sugars is not yet known. Here, we have used a combination of chemical, biochemical, and biophysical approaches to describe the functional and structural characterization of the mycobacterial LpqY substrate binding domain of the LpqY-SugABC ABC transporter and reveal its substrate specificity and the molecular framework that underpins the recognition of trehalose and related substrates. These findings offer fundamental insights into how mycobacteria recognize and import trehalose, a critical process in the virulence and survival of the Mtb pathogen.

Production of Mtr LpqY
The optimal conditions for the production of Mtb LpqY and mycobacterial LpqY homologs were explored extensively in Escherichia coli. This yielded LpqY from Mycobacterium thermoresistible (Mtr), which has high sequence identity (72%) to Mtb LpqY at the amino acid level (Fig. S1). Soluble Mtr LpqY protein was readily obtained and purified using Ni2+-affinity and size-exclusion chromatography (Fig. S2), and the identity of the Mtr LpqY protein was confirmed by mass spectrometry.

Substrate specificity of Mtr LpqY
To establish whether the LpqY-SugABC transporter is specific for trehalose or is instead promiscuous toward other carbohydrates, a panel of monosaccharides and disaccharides (10 mM) was screened for the ability to stabilize the melting temperature (Tm) of the Mtr LpqY substrate binding domain.
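For illustration, thermal-shift (ΔTm) values of this kind are commonly read off as the temperature of the steepest transition of each melt curve; a minimal MATLAB sketch with synthetic curves (not the study's data) is shown below:

% Minimal sketch (synthetic curves, not the study's data): estimate Tm as the
% temperature of maximal slope of a thermal-melt signal, then take the shift.
T      = (25:0.5:95)';                          % temperature ramp (degrees C)
F_apo  = 1 ./ (1 + exp((55.0 - T) / 1.5));      % synthetic apo melt curve, Tm ~ 55.0
F_treh = 1 ./ (1 + exp((66.5 - T) / 1.5));      % synthetic +trehalose curve, Tm ~ 66.5
[~, iA] = max(gradient(F_apo,  T)); Tm_apo  = T(iA);
[~, iT] = max(gradient(F_treh, T)); Tm_treh = T(iT);
dTm = Tm_treh - Tm_apo;                         % ~11.5 C, cf. the trehalose shift below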
In total, 62 potential substrates were probed, and the observed change in the melting temperature (ΔTm) of Mtr LpqY was assessed, which can be indicative of binding (Fig. 1, Figs. S3 and S4). Notably, trehalose resulted in the highest thermal shift (ΔTm of ~11.5 °C) of Mtr LpqY relative to the protein alone and compared with all substrates tested, indicating that Mtr LpqY has a clear preference for this physiologically relevant sugar (Fig. S3). To validate our findings from the thermal-shift screening, the binding interactions of Mtr LpqY were assessed with the disaccharide substrates that resulted in the largest thermal shifts. Isothermal titration calorimetry (ITC) experiments were performed with the preferred trehalose substrate and also with galactotrehalose, which was found to result in a 2-fold reduction in the ΔTm of Mtr LpqY compared with trehalose. ITC analysis revealed a 1:1 sugar:Mtr LpqY binding stoichiometry for both sugars and equilibrium dissociation constants (Kd) of 1.1 ± 0.04 μM and 2.1 ± 0.2 μM, respectively (Fig. S5). This is in agreement with the range of reported Kd values determined by ITC for substrate binding domains of other ABC transporters (15, 16), with a Kd value of 13 μM reported for an α-glycoside ABC transporter from Thermus thermophilus (17). This provides direct evidence that this mycobacterial importer is a high-affinity trehalose transporter. We also tested the binding affinity of trehalose by microscale thermophoresis (MST) and confirmed that binding is also in the micromolar range, with an observed Kd value of 72 μM (Table 1, Fig. S6). Given that MST consumes significantly less protein than ITC, we therefore used the MST assay to evaluate the binding affinities of the other sugar substrates. The Kd values obtained are reported in Table 1. Among all of the substrates tested, we were able to determine binding affinities for 2H-trehalose, 2-, 3-, 4-, and 6-azido-trehalose, galactotrehalose, mannotrehalose, and kojibiose. As expected, the Kd value for the deuterated 2H-trehalose analog was comparable to that of trehalose, whereas the modified trehalose derivatives displayed slightly weaker binding affinities. This is consistent with the use of azido-modified trehalose tools developed to evaluate trehalose metabolism in mycobacteria (13). In these studies, we observed that the asymmetric epimeric analogs, galactotrehalose and mannotrehalose, showed ~3- and ~12-fold reductions in binding affinity, respectively. These findings are compatible with our recent studies in Mycobacterium smegmatis which, unexpectedly, showed that 6-azido-galactotrehalose is incorporated into the mycobacterial cell envelope with a similar efficiency as 6-azido-trehalose via the M. smegmatis LpqY-SugABC transporter (12, 13). This result indicates that Mtr LpqY is able to tolerate epimerization of the hydroxy group at the 4-position, whereas epimerization at the 2-position is less favored. The preference for the α1-1 glycosidic bond was further confirmed through evaluation of alternative α-glycoside disaccharides. Among these analogs, a binding affinity could be determined only for kojibiose (α1-2) under these assay conditions, with a ~30-fold increase in the Kd value observed. We were unable to obtain reliable Kd values for nigerose and isomaltose because of low signal-to-noise ratios, suggesting that sugars with α1-3 and α1-6 glycosidic bonds have reduced binding affinities and are not recognized. Finally, we did not observe binding to glycerophosphocholine, which is the substrate of the Mtb UgpABCE ABC transporter (Fig. S3) (18), indicating that each Mtb carbohydrate importer has a distinct substrate preference and is only able to accept minor structural modifications (18, 19).
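The Kd values above come from fitting binding isotherms; as a generic illustration (synthetic data, and not the vendor analysis software actually used for MST), a one-site fit in MATLAB might look like this:

% Minimal sketch (synthetic data): least-squares fit of a one-site binding
% isotherm, fraction bound = L ./ (Kd + L), to a normalized dose series.
rng(1);                                        % reproducible noise
L   = logspace(-7, -2, 12)';                   % titrated ligand concentrations (M)
fb  = L ./ (72e-6 + L) + 0.02*randn(size(L));  % synthetic response, true Kd = 72 uM
obj = @(Kd) sum((fb - L ./ (Kd + L)).^2);      % sum-of-squares objective
KdHat = fminsearch(obj, 1e-5);                 % crude scalar estimate of Kd (M)
fprintf('fitted Kd = %.0f uM\n', KdHat * 1e6);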
Taken together, these data establish that the Mtr LpqY substrate binding protein is highly specific for trehalose.

Co-crystal structure of Mtr LpqY with trehalose
To determine the molecular and structural basis of trehalose recognition, we solved the crystal structure of Mtr LpqY with the trehalose substrate present (Fig. 2). The Mtr LpqY-trehalose complex crystallized in space group P41212, and the structure was determined by exploiting the anomalous signal from iodide-soaked crystals. The model containing bound iodide ions was then used as a search model to solve the structure of a native, higher-resolution data set by molecular replacement, and the final Mtr LpqY-trehalose complex structure was refined at a resolution of 1.7 Å to an Rwork of 16.9% and Rfree of 19.4% (see Table S1 for the data collection and refinement statistics). Two Mtr LpqY protein molecules are present within the asymmetric unit. Structural superposition indicates that each subunit is equivalent, aligning with an r.m.s.d. of 0.45 Å over all residues, whereas analysis of the crystal packing and its interfaces indicates that Mtr LpqY does not form dimers or higher oligomers (20, 21). This is consistent with our solution size-exclusion studies, where Mtr LpqY is found as a monomer (Fig. S2D). Therefore, it is likely that the monomer is the biologically relevant unit, which is consistent with the known oligomeric state of substrate binding domains from other ABC transporters (22, 23).

Overall structure of the Mtr LpqY-trehalose complex
The overall architecture of Mtr LpqY is typical of substrate-binding domains of ABC transporters, consisting of two globular α/β domains joined by a hinge region (23), which in this instance is formed from three flexible loops. Domain I (residues 14-132 and 335-400) and domain II (residues 137-317 and 409-448) both comprise a central β-sheet that is flanked on both sides by α-helices. The two lobes are connected via three flexible loops: Thr132-Leu137 (loop 1), Ala317-Leu335 (loop 2), and Asn400-Val409 (loop 3). The Mtr LpqY-trehalose complex adopts a closed conformation, which is further stabilized through a central arginine residue within the hinge region (Arg404, loop 3) that forms interdomain hydrogen bonds with the carbonyl oxygen of Leu332, located on loop 2, and the carboxylate of Glu179 from domain II, as well as being directly involved in substrate binding.

Ligand binding site of Mtr LpqY
The trehalose molecule was clearly defined in the electron density; it resides within the acidic binding cleft formed between domains I and II and interacts with residues from both domains (Fig. 2, Fig. S7). Trehalose comprises two glucopyranosyl units connected by an α1,1-glycosidic bond. In the Mtr LpqY structure, both of the glucose rings adopt a classical 4C1 chair conformation with almost equivalent dihedral angles across the glycosidic bond (φH = 63.2°, ψH = 65.4°), thus having rotational symmetry about the central glycosidic oxygen atom, mimicking the conformation of anhydrous trehalose crystallized in the absence of protein (24).
In Mtr LpqY, trehalose is orientated such that one glucose unit (Glc-1) is buried at the base of the binding cleft, in close proximity to the hinge region containing Arg404, whereas the second glucose unit (Glc-2) extends outwards toward the entrance of the binding channel (Fig. 3). The disaccharide is anchored into place through a significant network of hydrogen bonds in which all sugar hydroxyl groups participate, with additional hydrogen bond interactions formed through the ring oxygen of Glc-1 as well as the glycosidic oxygen atom. The buried Glc-1 unit is orientated to interact with the side chains of Asp80, Asn134, Glu241, Trp259, and Arg404, and with the backbone amide of Gly334. The second glucose unit (Glc-2) is stabilized through direct hydrogen bond interactions with Asn25, Glu26, Gln59, and Asn134. Water-mediated interactions are also observed with both C6-hydroxyl groups and the glycosidic oxygen atom, as well as an intraglucose bridging water between the C6-hydroxyl group of Glc-1 and the C2-hydroxyl group of Glc-2. Hydrophobic interactions provide additional stabilization between the indole side chain of Trp259 and Glc-1, with potential van der Waals interactions between the side chains of Trp261 and Leu371 and Glc-2. Recognition of trehalose appears to be largely driven through accommodation of this ligand within a binding pocket where substrate selectivity is underpinned by an extensive hydrogen bonding network. These interactions have a pronounced effect on substrate recognition and dictate the stringent stereoselective requirement for an α1,1-linked disaccharide. All of the residues that interact with trehalose are conserved in Mtb LpqY, with the exception of residues Asn25 and Glu26, which are located on a short loop region comprising four residues formed between β1 and α1. In Mtb, Asn25 and Glu26 are instead replaced with a threonine and an aspartic acid residue, respectively, so while the residues are slightly smaller, the properties of the side chains are maintained. Evaluating the sequence alignment of mycobacterial LpqY homologs reveals a much greater sequence divergence among these nonconserved loop residues, suggesting a degree of flexibility of substrate recognition in this region, though an acidic residue is always found at position 26 (Fig. S8).

Site-directed mutagenesis of Mtr LpqY
To complement our structural studies and understand the functional importance of the residues coordinating trehalose within the Mtr LpqY binding pocket, point mutations were introduced to substitute nine individual residues with alanine (Table 2). Two further residues, Asn25 and Glu26, in the short loop region that interacts with Glc-2, were mutated to threonine and aspartic acid, respectively, to replicate the Mtb LpqY binding site. Proper folding was assessed by circular dichroism, and we determined that these mutations were not detrimental to correct folding, except for the Asp80Ala mutant, which was inherently less stable and had a distinctive circular dichroism profile (Fig. S9). The corresponding aspartic acid residue (Asp70) in T. thermophilus has been implicated in enabling the closure of domains I and II upon substrate binding, which may explain the instability of this particular Mtr LpqY mutant (17). MST was used to determine the binding affinities of each site-directed Mtr LpqY mutant protein for trehalose.
Complete abrogation of binding was observed when Glu241, Trp259, or Arg404 was replaced by alanine, and a significant ~100-fold increase in Kd was observed for the Asn134 mutant, highlighting that these residues are essential for substrate recognition and binding. In contrast, binding of trehalose was still observed when Asn25, Glu26, Gln59, or Leu335 was replaced by alanine, with a corresponding ~3-fold reduction in Kd for the Asn25 and Glu26 mutants and ~10- and 13-fold reductions in the Kd values for the Gln59 and Leu335 mutants, respectively, compared with wild-type Mtr LpqY. This indicates that while these amino acids are important for binding, mutations within these regions can be tolerated and are less critical for trehalose recognition. Examination of the sequence alignments reveals that the Asn25 and Glu26 residues are not conserved between mycobacterial homologs and that an alanine residue at position 25 naturally occurs in Mycobacterium marinum LpqY (Fig. S8). In contrast, the Mtr LpqY Asn25-Glu26 double mutant that mimics the Mtb LpqY binding site resulted in a higher binding affinity for trehalose and has the same substrate profile as Mtr LpqY (Fig. S10), indicating that these nonconserved residues have an important role in the recognition of the trehalose substrate in Mtb.

Molecular dynamics simulations of Mtr LpqY
To further explore the interactions between trehalose and Mtr LpqY, molecular dynamics (MD) simulations were performed over three repeats of 600 ns (Fig. 4, Movies S1 and S2). The simulations identified that trehalose has an unexpectedly short retention time in the binding pocket of ~130 to 150 ns (Fig. 4B). Upon release of the sugar, Mtr LpqY undergoes a closed-to-open transition with a 131° rotational opening of the two domains, calculated with DynDom (25) (Fig. S11), typical of the "Venus flytrap" mechanism reported for other substrate binding proteins (23). As the initial set of simulations was performed with amino acids set to their default protonation states, we analyzed whether any of the side chains had a predicted nonstandard pKa value, based on the coordinates of the crystal structure, using the PROPKA tool (26). This identified Glu256, located on β9, as being of interest: it was found to have a high pKa of 8.4 in the crystal structure, compared with an expected value of 4.5, which suggests that in this conformation it could be protonated. The simulations were therefore repeated with Glu256 protonated and compared with the results for the deprotonated form. Unlike in the previous simulations, trehalose remained within the Mtr LpqY binding site for the entirety of each repeat, despite Glu256 being distant from the trehalose binding site. A comparison between the two sets of simulations can be seen in Figure 4, with the contacts between Mtr LpqY and trehalose agreeing with those observed in the X-ray structure (Figs. 3 and 4). The residues that were identified as critical for trehalose binding, Asn134, Glu241, Trp259, and Arg404 (Table 2), maintained contact with the disaccharide for the majority of the simulation, further highlighting their importance in sugar recognition. A notable difference between the protonated and deprotonated simulations is an increased interaction with Glu241 when Glu256 is protonated (Fig. 4C). Analysis of our structure identified that Glu256 may influence the interaction of Glu241 with trehalose via a hydrogen bond bridging interaction with Asn258.
Indeed, our simulation data indicate that protonation of Glu256 results in increased contact of Asn258 with Glu241. We postulate that the increased hydrogen-bonding availability of Asn258 stabilizes the interaction of LpqY with trehalose (Fig. S12). Overall, our results suggest that the contacts between Glu241 and trehalose could be significant in retaining the disaccharide until LpqY engages with the SugABC transporter.

Saturation transfer difference NMR of Mtr LpqY with trehalose and 6-azido-trehalose
Azide-modified trehalose analogs coupled with bioorthogonal "click" labeling are useful tools to investigate trehalose uptake and metabolism in mycobacteria (13). However, despite numerous efforts, we were unable to obtain a crystal structure of Mtr LpqY in complex with 6-azido-trehalose. Therefore, to further our understanding of the mode of ligand binding, the binding epitope of 6-azido-trehalose with Mtr LpqY was determined in solution by saturation transfer difference (STD) NMR experiments, as described in the Experimental procedures. The binding of trehalose to Mtr LpqY was also assessed by STD NMR to establish its binding epitope in solution and enable comparison with our X-ray structure. Binding was confirmed for both trehalose and 6-azido-trehalose, and the corresponding epitope maps are shown in Figure 5. For both ligands, STD NMR signals were obtained for each hydrogen atom of both glucose units (Fig. S13), indicating that both carbohydrate rings are important in binding recognition. As a result of the C2 symmetry of the trehalose disaccharide, identical binding epitopes were obtained for each glucose unit (Fig. 5 and Fig. S13). Strong STD intensities for the protons at positions 1 to 4 were observed, suggesting that these are in close contact with Mtr LpqY, whereas medium intensity values were observed for the protons at positions 5 and 6 (Fig. 5). In direct contrast, a different STD NMR intensity pattern was determined for the desymmetrized 6-azido-trehalose derivative (Fig. 5B), indicating that the azido-modified analog binds in a single orientation within the Mtr LpqY binding pocket. Notably, the 6-azido-modified glucose ring displays an overall decrease in the relative STD intensities. In particular, weak STD NMR signals for the protons at positions 1 and 2 are observed, which is compatible with the lower binding affinity observed for the azide-modified analog (Table 1). To probe for additional structural information in the solution state, and to gain information about the orientation of the ligand within the binding site and the type of amino acids contacting the hydrogen atoms of the bound ligand, we then utilized the recently developed differential epitope mapping STD NMR (DEEP-STD NMR) approach (Fig. S14) (27). This has been successfully applied to study other ABC transporters in gut bacteria (28). The DEEP-STD maps highlight differences in the orientations of ligand protons relative to protein aliphatic and aromatic side chains in the binding pocket and clearly indicated that the molecular determinants of trehalose binding to Mtr LpqY correlate in both the solution and solid states. In the case of 6-azido-trehalose, individual DEEP-STD intensity patterns for each monosaccharide were observed, indicating that protons from both glucose rings make distinct close contacts with Mtr LpqY (Fig. S14B). Specifically, the H1, H1', H2, H6a, and H6b protons are orientated toward aromatic residues, and H3, H2', H3', and H6' are orientated toward aliphatic side chains (Fig. S14).
Given the possibility that the azide-containing glucose ring could bind in either glucose subsite, we modeled the binding of 6-azido-trehalose based on the experimental NMR-derived interactions (Fig. 6). Altogether, these results indicate that the unmodified glucose ring is positioned at the base of the Mtr LpqY binding pocket, with the 6-azido-glucose ring accommodated at the second subsite located toward the channel entrance and the 6-azido group extending into an expanded binding pocket in this region (Fig. 6B).

Discussion

The ongoing battle of Mtb to assimilate scarce nutrients during intracellular infection is a critical factor for the survival of this major global pathogen. Trehalose is a key component of the mycobacterial cell envelope, and "free" trehalose, released from the trehalose-containing glycolipids, is recovered by the LpqY-SugABC ABC transporter (6). Significantly, a functioning trehalose transport system is essential for Mtb to establish infection (6) and has no obvious human homolog; for these reasons, this importer has been implicated as a target for the development of new antitubercular agents and diagnostic tools. We sought to investigate the substrate specificity and molecular basis of trehalose recognition of the mycobacterial LpqY substrate binding protein. Altogether, our results provide a number of important new insights. Significantly, our biochemical, X-ray crystallographic, MD simulation, and STD NMR data are consistent and provide the first direct evidence that Mtr LpqY is highly specific for trehalose. It is particularly noteworthy that the Mtr LpqY-SugABC transporter does not recognize alternative monosaccharides or disaccharides, or known substrates of other Mtb carbohydrate importers, which further underscores the notion that each Mtb carbohydrate importer has a distinct substrate preference (18, 19). Further experiments are now underway to link the recognition of carbohydrates by LpqY with uptake by the LpqY-SugABC transport system. Our Mtr LpqY co-complex crystal structure in combination with STD NMR provides a unique insight into the molecular basis of trehalose recognition and substrate specificity in Mtb. Notably, trehalose specificity is manifested through a network of hydrogen bond interactions which link each hydroxy group from both glucose moieties to residues located within the LpqY binding pocket. These interactions have a pronounced effect on substrate recognition and dictate the stringent stereoselective requirement for an α1,1-linked disaccharide. It is particularly interesting to highlight the inability of Mtr LpqY to bind maltose (α1-4), as this feature differs significantly from α-glycoside disaccharide transporters from Thermus sp. that bind multiple carbohydrates, including glucose (17, 29). Consistent with the low sequence identity between these substrate-binding proteins (PDB 6J9W: 23%, PDB 1EU8: 27.5%), there are significant differences between the carbohydrate binding motifs of these organisms that originate from different regions of the proteins (Figs. S15 and S16). We propose that Mtb has evolved unique structural features to facilitate the specific import of the main disaccharide present in its niche host environment, compared with the diversity of sugars available in geothermal habitats. As expected, given the lack of genes encoding phosphotransferase systems in Mtb, Mtr LpqY did not recognize trehalose-6-phosphate.
Modified trehalose derivatives have been developed as tools to probe trehalose processing pathways in mycobacteria; however, up until now, the structural basis for the selective recognition of these analogs was unknown (11)(12)(13). Notably, our STD NMR studies in combination with MST analyses support the uptake of the 6-azido-trehalose analog by the mycobacterial LpqY-SugABC transporter and explain the reduced affinity for this chemically modified substrate. It is particularly interesting to note that through the systematic evaluation of substrate specificity, we observe that Mtr LpqY is promiscuous for alternative trehalose derivatives modified at each position. Previous studies have shown that while 2-, 4-, and 6-azido-trehalose analogs are imported and found in the cytosol, 3-azido-trehalose is not (13). Interestingly, our binding studies indicate that Mtr LpqY has similar affinity for 3- and 6-azido-trehalose, suggesting that while 3-azido-trehalose binds to Mtr LpqY, it is a noncognate ligand and is not transported by LpqY-SugABC. This finding may have interesting implications for the design of inhibitors of this essential ABC transporter. Our understanding of how the LpqY-SugABC transporter ensures an efficient intracellular supply of trehalose to mycobacteria is still evolving. However, our structural and MD simulation data suggest an important role for the protonation state of Glu256. It is likely that protonation of the Glu256 side chain stabilizes the interaction of trehalose in the binding pocket. This is supported by the observation that when Glu256 is protonated, trehalose remains within the Mtr LpqY binding pocket for the entire simulation. Analysis of the contacts suggests that a significant contribution to sugar recognition is from Glu241, which is mediated through Asn258. The observation that the protonation state of a side chain within a substrate binding protein influences the stability of substrate recognition raises new questions about the mechanistic basis of transport by ABC transporters. In conclusion, one of the major hurdles in TB drug development is that molecules need to penetrate the mycobacterial cell envelope to gain intracellular access and kill Mtb. However, the complex Mtb cell envelope poses a formidable permeability barrier, which prevents drugs and diagnostic tools from accessing the cytoplasm. The opportunities for targeting the vulnerable Mtb LpqY-SugABC transporter are twofold. First, the extracellular location of the LpqY substrate binding lipoprotein component provides a route to develop TB drugs that can kill Mtb without needing to cross the impermeable cell envelope. Second, it offers the opportunity to hijack this import system to deliver potent inhibitor substrate mimics into the mycobacterial cell. The results from this work represent a significant step in this direction and provide a robust framework to ultimately exploit this transporter in the rational development of new antitubercular agents and diagnostic tools.

Experimental procedures

All chemicals and reagents were purchased from Sigma-Aldrich or Carbosynth, unless specified. PCR and restriction enzymes were obtained from New England Biolabs. Double-distilled water was used throughout.

Plasmid construction

M. thermoresistibile (NCTC 10409) was obtained from the Public Health England National Collection of Type Cultures, and gDNA was isolated using established protocols (30).
The lpqY gene was amplified from Mtr genomic DNA by PCR using gene-specific primers (Table S2) based on the annotated sequence retrieved from the NCBI database (GenBank LT906483). It is possible that the start codon for the Mtr lpqY gene lies further upstream than annotated in the NCBI database, in which case the Mtr LpqY protein studied here is a truncated version. The PCR amplification (Q5 polymerase [NEB]) consisted of an initial denaturation (95 °C, 2 min) and 30 cycles (95 °C, 1 min; 60 °C, 30 s; 72 °C, 3 min), followed by a final extension (10 min at 72 °C). The resulting PCR product was cloned into a modified pET-SUMO vector (a gift from Dr Patrick Moynihan, University of Birmingham) using the BamHI and HindIII restriction enzyme sites, resulting in the construct mtr_lpqY_sumo. Targeted single-site substitutions were introduced into mtr_lpqY_sumo using the primers detailed in Table S2 with Phusion HF polymerase and the following PCR cycle: 98 °C, 30 s; 20 cycles of 98 °C, 30 s, 60 °C, 30 s, 72 °C, 4 min; followed by 5 min at 72 °C and digestion with 1 μl DpnI. All plasmid sequences were verified by DNA sequencing (GATC) and used for protein expression.

Recombinant overexpression of Mtr LpqY

E. coli BL21 (DE3) competent cells were transformed with the appropriate mtr_lpqY_sumo expression plasmid and grown at 27 °C to an optical density at 600 nm (OD600) of 0.6 to 0.8 in terrific broth medium supplemented with 50 μg/ml kanamycin. Protein production was induced with 1 mM isopropyl β-D-thiogalactopyranoside, and the cultures were grown at 16 °C overnight with shaking (180 rpm). The cells were harvested (4000g, 30 min, 4 °C) and resuspended in lysis buffer (20 mM Tris, 300 mM NaCl, 10% glycerol, pH 7.5 [buffer A]) supplemented with 0.1% Triton X-100 and frozen at −80 °C until further use.

Protein purification

A complete protease inhibitor tablet (Roche), 5 mM MgCl2, 2 mg of DNase, and 20 mg of lysozyme were added to the resuspended pellet, and the pellet was sonicated on ice (Sonicator Ultrasonic Liquid Processor XL; Misonix). Following centrifugation (39,000g, 30 min, 4 °C), the supernatant was filtered (0.45 μm filter) and loaded onto a pre-equilibrated HisPur Ni2+-NTA affinity resin (Thermo Scientific). The column was washed with buffer A (5 column volumes), and the recombinant Mtr LpqY protein was eluted from the Ni2+ resin with increasing concentrations of imidazole. Fractions containing the Mtr LpqY protein were digested with His-tagged SUMO protease (1 h, 30 °C, 300 μg) and dialyzed at 4 °C for 12 h against buffer A. A second HisPur Ni2+-NTA affinity resin purification step was undertaken, and the fractions containing Mtr LpqY protein were pooled and purified further using size exclusion chromatography (Superdex 200 16/600 column, GE Healthcare) with buffer A. Fractions containing Mtr LpqY were combined, and a final HisPur Ni2+-NTA affinity resin purification step was undertaken with buffer A. The flowthrough fractions containing purified Mtr LpqY were pooled, and the protein was concentrated to 5 to 14 mg/ml (Vivaspin 20; GE Healthcare) and stored at −80 °C. The identities of the proteins were confirmed by tryptic digest and nanoLC-electrospray ionization-MS/MS (WPH Proteomics Facility, University of Warwick).

Circular dichroism analysis

Purified Mtr LpqY proteins were diluted to 0.2 mg/ml and dialyzed into the following buffer: 20 mM Tris, 20 mM NaCl, pH 7.5. The samples were transferred into a 1 mm path length quartz cuvette and analyzed on a Jasco J-1500 CD spectrometer from 198 to 260 nm.
Spectra were acquired in triplicate and averaged after subtraction of the buffer background.

Crystallization and structure determination

For co-crystallization experiments, Mtr LpqY was buffer exchanged into 20 mM HEPES, 20 mM NaCl, pH 7.5 and incubated with 10 mM trehalose at room temperature for 10 min. Successful crystallization required the removal of unbound trehalose through a series of concentration and dilution wash steps (Vivaspin 20; GE Healthcare) before crystallization. Crystals of Mtr LpqY in complex with trehalose were grown by vapor diffusion in 96-well plates (Swissci) using a Mosquito liquid handling system (TTP LabTech) by mixing 1:1 volumes (100 nl) of concentrated LpqY (14 mg/ml) with reservoir solution. Mtr LpqY crystals typically grew within 3 to 7 days at 22 °C in 0.1 M HEPES pH 6.0, 50% w/v polypropylene glycol 400, 5% DMSO, and 1 mM TCEP. The Mtr LpqY crystals were either directly flash-frozen in liquid nitrogen before data collection or soaked in 1 M NaI prepared in the same crystallization buffer for 5 min before freezing. The X-ray diffraction data for the ligand-bound Mtr LpqY crystals and iodide derivatives were collected at the I03 beamline of Diamond Light Source. All diffraction data were indexed, integrated, and scaled with XDS (http://xds.mpimf-heidelberg.mpg.de/) (31) through the XIA2 pipeline and the CCP4 suite of programs (32). Initial phases were determined based on an iodide derivative through the Big_ep phasing pipeline (33). An initial model of Mtr LpqY was generated using Autobuild (34). This structural model was used to determine a molecular replacement solution (Phaser (35)) for a native Mtr LpqY data set, and refinement was carried out in phenix.refine (36) with manual rebuilding in COOT (37). The find ligand function in COOT was used to fit the trehalose ligand into unoccupied electron density in both chains of the asymmetric unit. The restraints for use in refinement were calculated using REEL (38). The model of the ligand-bound structure comprises residues 14 to 448 in both chains (A-B). No Ramachandran outliers were identified, and structure validations were done by MolProbity (39). Figures were prepared using PyMOL (The PyMOL Molecular Graphics System, Version 2.0, Schrödinger, LLC), except for those showing electron density, which were prepared using CCP4mg (40).

1H STD NMR experiments

All the STD NMR experiments were performed in PBS D2O buffer, pH 7.4. For the LpqY/trehalose complex, the protein concentration was 25 μM, whereas the ligand concentration (trehalose or 6'-azido-6'-deoxy-trehalose) was 1 mM. STD NMR spectra were acquired on a Bruker Avance 500.13 MHz spectrometer at 288 K. The on- and off-resonance spectra were acquired using a train of 50 ms Gaussian selective saturation pulses with a variable saturation time from 0.5 s to 5 s and a relaxation delay (D1) of 4 s. The water signal was suppressed using the WATERGATE technique (41), whereas the residual protein resonances were filtered using a T1ρ filter of 40 ms. All the spectra were acquired with a spectral width of 8 kHz and 24K data points using 512 scans. The on-resonance spectra were acquired by saturating at 0.80 ppm (aliphatic hydrogens) or 7.20 ppm (aromatic hydrogens), the average chemical shifts predicted by SHIFTX2 (42) for the aliphatic and aromatic residues present in the binding site of Mtr LpqY, whereas the off-resonance spectra were acquired by saturating at 40 ppm.
To get accurate structural information from the STD NMR data and to minimize T1 relaxation bias, the STD build-up curves were fitted to the equation STD(tsat) = STDmax·(1 − exp(−ksat·tsat)), calculating the initial growth rate factor as STD0 = STDmax·ksat and then normalizing all values to the highest one (43). DEEP-STD factors were obtained as previously described (27) after a saturation time of 1 s on the aliphatic or aromatic regions (0.80 or 7.20 ppm, respectively).

Docking calculations

Schrödinger's Maestro 2019-1 suite was used to dock both disaccharides into Mtr LpqY, employing the crystal structure of Mtr LpqY in complex with trehalose. First, the water molecules and ions were removed using the Protein Preparation Wizard tool, and the protonation state for each residue was calculated with Epik at pH 7.5. Both ligands (trehalose and 6'-azido-6'-deoxy-trehalose) were prepared using LigPrep. Before the docking calculation, a receptor grid was generated with Glide by setting a square box with a 20 Å side centered on the trehalose from the crystal structure (which was then removed). Trehalose and 6'-azido-6'-deoxy-trehalose were then docked with Glide with extra precision, and a post-dock minimization was performed. Data were processed, and figures prepared, with the Maestro suite.

Thermal shift assay

The transition unfolding temperature Tm of the Mtr LpqY protein (2.6 μM) was determined in the presence or absence of ligands. The screen used a final ligand concentration of 10 mM. Reactions were performed in a total volume of 20 μl using a Rotor-Gene Q Detection System (Qiagen), setting the excitation wavelength to 470 nm and detecting emission at 555 nm of the SYPRO Orange protein gel stain at 31× final concentration (Invitrogen, 5000× concentrate stock). The cycle used was a melt ramp from 30 to 95 °C, increasing the temperature in 1 °C steps with time intervals of 5 s. Fluorescence intensity was plotted as a function of temperature. The Tm was determined using the Rotor-Gene Q software and the Analysis Melt functionality. All experiments were performed in triplicate.

Isothermal titration calorimetry

ITC experiments were performed using the PEAQ-ITC system (Malvern Panalytical Ltd) at 25 °C. Mtr LpqY was dialyzed extensively into 50 mM HEPES, 300 mM NaCl, pH 7.5, and the trehalose and galactotrehalose ligands were dissolved in this dialysis buffer. The syringe was loaded with the ligand (500 μM), and the calorimetric cell was loaded with Mtr LpqY (53.6 μM). Following a 60 s initial equilibration, an initial injection of 0.4 μl was performed, followed by 19 injections of 2.0 μl every 120 s with an injection speed of 0.5 μl/s. The data were analyzed using the "one set of sites" model within the MicroCal PEAQ-ITC software (Malvern), iterated using the Levenberg-Marquardt algorithm after subtraction of the control experiment (trehalose titrated into buffer). The thermodynamic and binding parameters were derived from the nonlinear least squares fit to the binding isotherm.

Microscale thermophoresis

The Mtr LpqY protein was labeled using the amine-reactive RED-NHS dye (3 μM) (second generation, NanoTemper Technologies) at a constant concentration of Mtr LpqY (2.6 μM). Excess dye was removed by size exclusion chromatography (Superdex 200 10/300 column [GE Healthcare] using 50 mM HEPES, 300 mM NaCl, pH 7.5). The compounds were prepared in PBS containing 0.05% Tween 20, and the final concentration of the protein in the assay was 500 nM.
The samples were loaded into Monolith NT.115 standard treated capillaries and incubated for 10 min before analysis using the Monolith NT.115 instrument (NanoTemper Technologies) at 21 °C with the auto-selected excitation power (20%) and medium laser power. The binding affinities were calculated using a single-site binding model with the MST NT Analysis software (version 7.0). All experiments were carried out in triplicate.

Atomistic simulations

All simulations were run using GROMACS 2019 (44). Simulations of the LpqY X-ray structure were performed without position restraints for a total of 600 ns and run in triplicate. In all cases, a 2 fs timestep was used, in an NPT ensemble with V-rescale temperature coupling at 310 K (45) and a semi-isotropic Parrinello-Rahman barostat at 1 bar, with protein/trehalose and water/ions coupled individually (46). Electrostatics were described using the particle mesh Ewald method with a cut-off of 1.2 nm, and the van der Waals interactions were shifted between 1 and 1.2 nm. The TIP3P water model was used. The water bond angles and distances were constrained by SETTLE (47). Hydrogen covalent bonds were constrained using the LINCS algorithm (48). Analysis was performed using MDAnalysis (49) and visualized in PyMOL. Protonation state calculations were performed using PROPKA3 (26).

Synthesis

A full description of the methods for the synthesis and characterization of all compounds is provided in the supporting information.

Data availability

The structure presented in this article has been deposited in the Protein Data Bank (PDB) with the following code: 7APE. All remaining data are contained within the article.
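To make the build-up fitting described under the STD NMR methods concrete, here is a minimal sketch of how such curves can be fitted; it is not part of the original analysis pipeline, and the saturation times and intensities below are synthetic placeholders. Only the model STD(tsat) = STDmax·(1 − exp(−ksat·tsat)) and the normalization by the initial growth rate STD0 = STDmax·ksat are taken from the text.

```python
# Minimal sketch: fit STD build-up curves and compare initial growth rates.
# The model and the STD0 = STDmax * ksat normalization follow the Methods;
# the data points below are synthetic placeholders, not experimental values.
import numpy as np
from scipy.optimize import curve_fit

def std_buildup(t_sat, std_max, k_sat):
    """Mono-exponential STD build-up: STD(t) = STDmax * (1 - exp(-ksat * t))."""
    return std_max * (1.0 - np.exp(-k_sat * t_sat))

t_sat = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])   # saturation times (s)
# Hypothetical STD intensities for two protons (e.g., H1 vs H6):
protons = {
    "H1": np.array([0.21, 0.35, 0.48, 0.53, 0.55, 0.56]),
    "H6": np.array([0.10, 0.18, 0.27, 0.31, 0.33, 0.34]),
}

std0 = {}
for name, std in protons.items():
    (std_max, k_sat), _ = curve_fit(std_buildup, t_sat, std, p0=(0.5, 1.0))
    std0[name] = std_max * k_sat          # initial growth rate STD0

# Normalize all STD0 values to the largest one to build the epitope map.
ref = max(std0.values())
for name, value in std0.items():
    print(f"{name}: relative STD0 = {100 * value / ref:.0f}%")
```

Normalizing to the initial slope rather than to a single-time-point intensity is what minimizes the T1 relaxation bias mentioned in the Methods.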
Editorial: The shape of lives to come

The five articles under this Research Topic constructively and critically discuss various dimensions and implications of The Theory of Thought-Shapers (TTS) (Hanna and Paans, 2021), thereby collectively making a strong case for its cogency and truth. TTS says that thought-shapers (i.e., mental frames, especially including metaphors, analogies, images, schemata, stereotypes, symbols, and templates) partially causally determine, form, and normatively guide (i.e., shape) our essentially embodied human minds-&-lives. Such shaping can occur more negatively, by means of mechanical, constrictive thought-shapers, or more positively, by organic, generative thought-shapers. TTS, which is empirically testable, is embedded within (a) a fundamental metaphysics of the mind-body relation and mental causation, the essential embodiment theory (Hanna and Maiese, 2009), and (b) a general theory of how social institutions shape people's lives for worse or for better, the mind-body politic (Maiese and Hanna, 2019). Against that theoretical backdrop, these five articles begin to reveal both the potential dangers of thought-shaping and how thought-shaping might be implemented in a more constructive way. Three of the articles in this Research Topic present contemporary case studies that reveal the centrality of affective thought-shaping. The other two examine how TTS can be extended into novel domains. The first three case studies outlined below deal, respectively, with (i) the COVID-19 pandemic insofar as it has applied to jobs, family support, and family life, (ii) the everyday lives of farmers in Xochimilco, an urban wetland near Mexico City, and (iii) parental or guardian hoping as it applies to families or other communities. All three studies examine how thought-shaping operates on people's desires, emotions, and feelings, and thereby influences their intentional actions. In "COVID-19 in the United States as affective frame," Protevi deploys the notion of an "affective frame" (i.e., the essentially embodied emotional scaffolding of a social situation) in order to work out a case study of a double-binding negative affective frame. This double bind arises due to a tension between, on the one hand, having a job as a caregiver who risks catching the disease, and, on the other hand, being a household provider of income who needs to perform another kind of caregiving labor at home while also risking passing on the disease to their family. Protevi's discussion highlights a collection of harmful social practices that impact workers who simultaneously strive to satisfy the demands associated with their roles as both breadwinners and caring professionals. In "Sense of agency, affectivity and social-ecological degradation: An enactive and phenomenological approach," Siqueiros-García et al. examine the impact of environmental change and degradation on people's affective lives.
Deploying the methods of phenomenology, enactivism, and ecological psychology, they argue that the loss of a traditional form of life in Xochimilco flows from the degradation of socio-ecological systems, which limits subjects' opportunities to relate to other people and the natural environment in a meaningful way. This loss of meaningful interactions with the environment, in turn, generates a feeling of loss of control, which affectively manifests among the farmers as anxiety, frustration, and negative stress. The destructive nature of this affective shaping thereby cripples their sense of agency. And in "'Bringing new life in': Hope as a know-how of not knowing," Cuffari et al. present another case study, this time of the practice of guardian or parental hoping, which they describe as a form of know-how that shapes subjects' cognitive and affective processes. In their view, hope is not an individual emotion but rather consists in a shared form of social activity. In particular, it involves linguistic activity and the navigation of uncertainty via the reframing of utterances. These authors also identify particular impediments to and facilitators of hope that function as restrictive or generative thought-shapers, respectively. The two novel domains into which TTS is extended are (i) neuro-immunology and (ii) the larger context of human representational activities, including but not necessarily restricted to language. In "Action-shapers and their neuro-immunological foundations," Paans and Ehlen extend the theory of thought-shapers (TTS) to what they call action-shaping, by presenting a theory of how neuro-immunological processes affect our intentional abilities and our capacity to act. Two of the examples that they discuss are chronic stress and high levels of sugar intake, both of which shape people's capacity to form intentions and execute actions. And in "Thought-shapers embedded," Kondor proposes that thought-shapers are built on a more basic and richer set of structures consisting of representational skills, tools, and social institutions, thereby embedding thought-shapers. Taken together, these five articles testify to the theoretical power of TTS and its potential application to a wide range of different topics. Our hope is that they initiate new conversations about the complex interplay between social institutions and human minds-&-lives.

Author contributions

RH wrote the initial draft of this editorial, which was then edited and revised in light of comments raised by MM, AG, JKi, and JKr. All authors contributed to the article and approved the submitted version.
Hilbert's 17th problem in free skew fields

This paper solves the rational noncommutative analog of Hilbert's 17th problem: if a noncommutative rational function is positive semidefinite on all tuples of hermitian matrices in its domain, then it is a sum of hermitian squares of noncommutative rational functions. This result is a generalization and culmination of earlier positivity certificates for noncommutative polynomials or rational functions without hermitian singularities. More generally, a rational Positivstellensatz for free spectrahedra is given: a noncommutative rational function is positive semidefinite or undefined at every matricial solution of a linear matrix inequality L ⪰ 0 if and only if it belongs to the rational quadratic module generated by L. The essential intermediate step towards this Positivstellensatz for functions with singularities is an extension theorem for invertible evaluations of linear matrix pencils.

Introduction

In his famous problem list of 1900, Hilbert asked whether every positive rational function can be written as a sum of squares of rational functions. The affirmative answer by Artin in 1927 laid ground for the rise of real algebraic geometry [BCR98]. Several other sum-of-squares certificates (Positivstellensätze) for positivity on semialgebraic sets followed; since the detection of sums of squares became viable with the emergence of semidefinite programming [WSV12], these certificates play a fundamental role in polynomial optimization [Las01, BPT13]. Positivstellensätze are also essential in the study of polynomial and rational inequalities in matrix variables, which splits into two directions. The first one deals with inequalities where the size of the matrix arguments is fixed [PS76, KŠV18]. The second direction attempts to answer questions about positivity of noncommutative polynomials and rational functions when matrix arguments of all finite sizes are considered. Such questions naturally arise in control systems [dOHMP09], operator algebras [Oza16] and quantum information theory [DLTW08, P-KRR+19]. This (dimension-)free real algebraic geometry started with the seminal work of Helton [Hel02] and McCullough [McC01], who proved that a noncommutative polynomial is positive semidefinite on all tuples of hermitian matrices precisely when it is a sum of hermitian squares of noncommutative polynomials. The purpose of this paper is to extend this result to noncommutative rational functions.

Let x = (x1, . . . , xd) be freely noncommuting variables. The free algebra C<x> of noncommutative polynomials admits a universal skew field of fractions C(<x>), also called the free skew field [Coh95, CR99], whose elements are noncommutative rational functions. We endow C(<x>) with the unique involution * that fixes the variables and conjugates the scalars. One can consider positivity of noncommutative rational functions on tuples of hermitian matrices. For example, let

𝔯 = (x3x1 + x4x2)(x1² + x2²)⁻¹(x1x3 + x2x4) ∈ C(<x>).

The solution of Hilbert's 17th problem in the free skew field presented in this paper (Corollary 5.4) states that every 𝔯 ∈ C(<x>) that is positive semidefinite on its hermitian domain is a sum of hermitian squares in C(<x>). This statement was proved in [KPV17] for noncommutative rational functions 𝔯 that are regular, meaning that 𝔯(X) is well-defined for every tuple of hermitian matrices.
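As a quick check (a direct computation added here for illustration, not quoted from any later section), the example 𝔯 above is indeed a sum of two hermitian squares: since the variables are fixed by the involution,

```latex
% Illustrative verification: the introductory example is a sum of hermitian squares.
% Here x_j^* = x_j, so s_j^* = (x_3 x_1 + x_4 x_2)(x_1^2 + x_2^2)^{-1} x_j.
s_j = x_j \,(x_1^2 + x_2^2)^{-1} (x_1 x_3 + x_2 x_4), \qquad j = 1, 2,
\\[4pt]
\sum_{j=1}^{2} s_j^* s_j
  = (x_3 x_1 + x_4 x_2)(x_1^2 + x_2^2)^{-1}\,(x_1^2 + x_2^2)\,(x_1^2 + x_2^2)^{-1}(x_1 x_3 + x_2 x_4)
  = \mathfrak{r}.
```

The point of the paper is that such certificates exist for every noncommutative rational function that is positive semidefinite on its hermitian domain, even when the function has singularities.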
As with most noncommutative Positivstellensätze, at the heart of this result is a variation of the Gelfand-Naimark-Segal (GNS) construction. Namely, if 𝔯 ∈ C(<x>) is not a sum of hermitian squares, one can construct a tuple of finite-dimensional hermitian operators Y that is a sensible candidate for witnessing nonpositive-definiteness of 𝔯. However, the construction itself does not guarantee that Y actually belongs to the domain of 𝔯. This is not a problem if one assumes that 𝔯 is regular, as was done in [KPV17]. However, it is worth mentioning that deciding regularity of a noncommutative rational function is a challenge on its own, as observed in [KPV17]. In this paper, the domain issue is resolved with an extension result: the tuple Y obtained from the GNS construction can be extended to a tuple of finite-dimensional hermitian operators in the domain of 𝔯 without losing the desired features of Y.

The first main theorem of this paper pertains to linear matrix pencils and is key for the extension mentioned above. It might also be of independent interest in the study of quiver representations and semi-invariants [Kin94, DM17]. Let ⊗ denote the Kronecker product of matrices.

Theorem A. Let Λ be a full linear pencil, and let Y ∈ M_{m×ℓ}(C)^d and W ∈ M_{ℓ×m}(C)^d with ℓ ≤ m be such that Λ(Y) and Λ(W) have full rank. Then there exists Z ∈ M_n(C)^d for some n ≥ m such that Y, W and Z fit together into a square matrix tuple at which Λ has an invertible evaluation; see Theorem 3.3 for the precise block form and the proof.

Together with a truncated rational imitation of the GNS construction, Theorem A leads to a rational Positivstellensatz on free spectrahedra. Given a monic hermitian pencil L = I + H1x1 + ··· + Hdxd, the associated free spectrahedron D(L) is the set of hermitian tuples X satisfying the linear matrix inequality L(X) ⪰ 0. Since every convex solution set of a noncommutative polynomial is a free spectrahedron [HM12], the following statement is called a rational convex Positivstellensatz, and it generalizes its analogs in the polynomial context [HKM12] and the regular rational context [Pas18].

Theorem B. Let L be a hermitian monic pencil and 𝔯 ∈ C(<x>). Then 𝔯 ⪰ 0 on D(L) ∩ dom 𝔯 if and only if 𝔯 belongs to the rational quadratic module generated by L:

QM(L) = { Σ_i 𝔮_i* 𝔮_i + Σ_j 𝔳_j* L 𝔳_j : 𝔮_i ∈ C(<x>), 𝔳_j ∈ C(<x>)^e },

where e is the size of L.

A more precise quantitative version is given in Theorem 5.2 and has several consequences. The solution of Hilbert's 17th problem in C(<x>) is obtained by taking L = 1 in Corollary 5.4. Versions of Theorem B for invariant (Corollary 5.7) and real (Corollary 5.8) noncommutative rational functions are also given. Furthermore, it is shown that the rational Positivstellensatz also holds for a family of quadratic polynomials describing non-convex sets (Subsection 5.4). As a contribution to optimization, Theorem B implies that the eigenvalue optimum of a noncommutative rational function on a free spectrahedron can be obtained by solving a single semidefinite program (Subsection 5.5), much like in the noncommutative polynomial case [BPT13, BKP16] (but not in the classical commutative setting). Finally, Section 6 contains complementary results about domains of noncommutative rational functions. It is shown that every 𝔯 ∈ C(<x>) can be represented by a formal rational expression that is well-defined at every hermitian tuple in the domain of 𝔯 (Proposition 2.1); this statement fails in general if arbitrary matrix tuples are considered. On the other hand, a Nullstellensatz for cancellation of non-hermitian singularities is given in Proposition 6.3.

Acknowledgment. The author thanks Igor Klep for valuable comments and suggestions which improved the presentation of this paper.
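Before the preliminaries, a brief numerical aside (an illustration of my own, not from the paper): fullness of a pencil manifests concretely as generic invertibility of its Kronecker-product evaluations, which one can probe by random sampling. The specific pencil below is an ad-hoc choice for the demonstration.

```python
# Numerical probe (illustrative only): evaluate a homogeneous linear pencil
# Lambda(X) = X1 (x) L1 + X2 (x) L2 at random hermitian tuples via Kronecker
# products and record how often the evaluation is invertible. The pencil
# below is an ad-hoc example chosen for the demo, not taken from the paper.
import numpy as np

L1 = np.array([[1.0, 0.0], [0.0, 1.0]])    # coefficient of x1
L2 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # coefficient of x2

def pencil_eval(X1, X2):
    """Lambda(X) = X1 (x) L1 + X2 (x) L2, a 2n x 2n matrix for n x n inputs."""
    return np.kron(X1, L1) + np.kron(X2, L2)

rng = np.random.default_rng(0)
for n in (1, 2, 3):
    hits = 0
    for _ in range(1000):
        A = rng.standard_normal((n, n)); X1 = (A + A.T) / 2   # random real symmetric
        B = rng.standard_normal((n, n)); X2 = (B + B.T) / 2
        if abs(np.linalg.det(pencil_eval(X1, X2))) > 1e-10:
            hits += 1
    print(f"n = {n}: invertible at {hits}/1000 random hermitian tuples")
```

For a full pencil the invertible evaluations form a dense open set at each admissible size, which is exactly what random sampling detects; the extension theorems below refine this by prescribing rectangular blocks of the evaluation point in advance.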
Preliminaries

In this section we establish terminology, notation and preliminary results on noncommutative rational functions that are used throughout the paper. Let M_{m×n}(C) denote the space of complex m×n matrices, and M_n(C) = M_{n×n}(C). Let H_n(C) denote the real space of hermitian n×n matrices. For X = (X1, . . . , Xd) ∈ M_{m×n}(C)^d, A ∈ M_{p×m}(C) and B ∈ M_{n×q}(C) we write

AXB = (AX1B, . . . , AXdB).

2.1. Free skew field. We define noncommutative rational functions using formal rational expressions and their matrix evaluations as in [K-VV12]. Formal rational expressions are syntactically valid combinations of scalars, freely noncommuting variables x = (x1, . . . , xd), rational operations and parentheses. More precisely, a formal rational expression is an ordered (from left to right) rooted tree whose leaves have labels from C ∪ {x1, . . . , xd}, and every other node is either labeled + or × and has two children, or is labeled ⁻¹ and has one child. For example, ((2 + x1)⁻¹x2)x1⁻¹ is a formal rational expression corresponding to such a tree. A subexpression of a formal rational expression r is any formal rational expression which appears in the construction of r (i.e., as a sub-tree). For example, the subexpressions of ((2 + x1)⁻¹x2)x1⁻¹ are

2, x1, x2, 2 + x1, (2 + x1)⁻¹, (2 + x1)⁻¹x2, x1⁻¹, ((2 + x1)⁻¹x2)x1⁻¹.

Given a formal rational expression r and X ∈ M_n(C)^d, the evaluation r(X) is defined in the natural way if all inverses appearing in r exist at X. The set of all X ∈ M_n(C)^d such that r is defined at X is denoted dom_n r. The (matricial) domain of r is

dom r = ⋃_{n∈N} dom_n r.

Note that dom_n r is a Zariski open set in M_n(C)^d for every n ∈ N. A formal rational expression r is non-degenerate if dom r ≠ ∅; let R_C(x) denote the set of all non-degenerate formal rational expressions. On R_C(x) we define an equivalence relation: r1 ~ r2 if and only if r1(X) = r2(X) for all X ∈ dom r1 ∩ dom r2. Equivalence classes with respect to this relation are called noncommutative rational functions. By [K-VV12, Proposition 2.2] they form a skew field denoted C(<x>), which is the universal skew field of fractions of the free algebra C<x> by [Coh95, Section 4.5]. The equivalence class of r ∈ R_C(x) is denoted 𝔯 ∈ C(<x>); we also write r ∈ 𝔯 and say that r is a representative of the noncommutative rational function 𝔯. There is a unique involution * on C(<x>) that is determined by α* = ᾱ for α ∈ C and xj* = xj for j = 1, . . . , d. Furthermore, this involution lifts to an involutive map * on the set R_C(x): in terms of ordered trees, * transposes a tree from left to right and conjugates the scalar labels. Note that X ∈ dom r implies X* ∈ dom r* for r ∈ R_C(x).

2.2. Hermitian domain. For r ∈ R_C(x) let hdom_n r = dom_n r ∩ H_n(C)^d. Then

hdom r = ⋃_{n∈N} hdom_n r

is the hermitian domain of r. Note that hdom_n r is Zariski dense in dom_n r because H_n(C) is Zariski dense in M_n(C) and dom_n r is Zariski open in M_n(C)^d. Finally, we define the (hermitian) domain of a noncommutative rational function: for 𝔯 ∈ C(<x>) let

dom 𝔯 = ⋃_{r∈𝔯} dom r,   hdom 𝔯 = ⋃_{r∈𝔯} hdom r.

By the definition of the equivalence relation on non-degenerate expressions, 𝔯 has a well-defined evaluation at X ∈ dom 𝔯, written as 𝔯(X), which equals r(X) for any representative r of 𝔯 that has X in its domain. The following proposition is a generalization of [KPV17, Proposition 3.3] and is proved in Subsection 6.1.

Proposition 2.1. For every 𝔯 ∈ C(<x>) there exists r ∈ 𝔯 such that hdom r = hdom 𝔯.

Next, let r ∈ R_C(x), let M be an affine matrix pencil of size e and let u, v ∈ C^e be such that:
(i) if A is a unital C-algebra and a ∈ A^d is such that r can be evaluated at a, then M(a) ∈ GL_e(A) and r(a) = u*M(a)⁻¹v;
(ii) if M(a) ∈ GL_e(A) and A = M_n(C) for some n ∈ N, then r can be evaluated at a.
We say that the triple (u, M, v) is a linear representation of r of size e.
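As a quick numerical illustration of matrix evaluations and domains (a sketch of my own, not part of the paper), the expression ((2 + x1)⁻¹x2)x1⁻¹ from above is defined at a tuple exactly when both inverted matrices are nonsingular:

```python
# Illustrative evaluation of the formal rational expression ((2 + x1)^{-1} x2) x1^{-1}
# at a tuple of hermitian matrices; X lies in dom_n(r) iff both inverted
# subexpressions are nonsingular at X. Sketch for intuition only.
import numpy as np

def eval_r(X1, X2):
    n = X1.shape[0]
    inner = 2 * np.eye(n) + X1          # subexpression 2 + x1
    # The evaluation is defined only if (2 + X1) and X1 are invertible:
    for M in (inner, X1):
        if abs(np.linalg.det(M)) < 1e-12:
            raise ValueError("X is outside dom_n(r)")
    return np.linalg.inv(inner) @ X2 @ np.linalg.inv(X1)

X1 = np.array([[1.0, 2.0], [2.0, -1.0]])    # hermitian; X1 and 2 + X1 invertible
X2 = np.array([[0.0, 1.0], [1.0, 3.0]])
print(eval_r(X1, X2))
# By contrast, X1 = 0 (or X1 = -2I) lies outside the domain and raises an error.
```

Different representatives of the same noncommutative rational function can have different domains of definition in this sense, which is precisely why Proposition 2.1 above is nontrivial.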
Usually, linear representations are defined for noncommutative rational functions and with less emphasis on domains; however, the definition above is more convenient for the purpose of this paper.

Remark 2.3. In the definition of a linear representation, (ii) is valid not only for M_n(C) but more broadly for stably finite algebras [HMS18, Lemma 5.2]. However, it may fail in general, e.g. for the algebra of all bounded operators on an infinite-dimensional Hilbert space.

We will also require the following proposition on pencils, which is a combination of various existing results.

Proposition 2.4. Let M be an affine matrix pencil of size e. The following are equivalent:
(i) M ∈ GL_e(C(<x>));
(ii) there are n ∈ N and X ∈ M_n(C)^d such that det M(X) ≠ 0;
(iii) for every n ≥ e − 1 there is X ∈ M_n(C)^d such that det M(X) ≠ 0.

Remark 2.5. If r ∈ R_C(x) admits a linear representation of size e, then hdom_n r ≠ ∅ for n ≥ e − 1 by Proposition 2.4 and the Zariski denseness of hdom_n r in dom_n r.

An extension theorem

An affine matrix pencil M of size e is irreducible if UMV = 0 for nonzero matrices U ∈ M_{e′×e}(C) and V ∈ M_{e×e″}(C) implies rk U + rk V ≤ e − 1. In other words, a pencil is not irreducible if it can be put into a 2×2 block upper triangular form with square diagonal blocks (∗ ∗; 0 ∗) by a left and a right basis change. Every irreducible pencil is full. On the other hand, every full pencil is, up to a left and a right basis change, equal to a block upper-triangular pencil whose diagonal blocks are irreducible pencils. In terms of quiver representations [Kin94], fullness and irreducibility of pencils correspond to semistability and stability of the associated Kronecker quiver representations.

For the purpose of this section we extend evaluations of linear matrix pencils to tuples of rectangular matrices. If Λ = Σ_{j=1}^d Λ_j x_j is of size e and X ∈ M_{ℓ×m}(C)^d, then

Λ(X) = Σ_{j=1}^d X_j ⊗ Λ_j ∈ M_{ℓe×me}(C).

The following lemma and proposition rely on an ampliation trick in a free algebra to demonstrate the existence of specific invertible evaluations of full pencils (see [HKV20, Section 2.1] for another argument involving such ampliations).

Lemma 3.1. Let Λ = Σ_{j=1}^d Λ_j x_j be a homogeneous irreducible pencil of size e. Let ℓ ≤ m and denote n = (m − ℓ)(e − 1). Given C ∈ M_{me×ℓe}(C), consider the pencil Λ̃ of size (m + n)e in the d(m + n)(n + m − ℓ) variables z_{jpq} built from C and the standard matrix units Ê_{p,q} ∈ M_{m×(n+m−ℓ)}(C) and Ě_{p−m,q} ∈ M_{n×(n+m−ℓ)}(C). If C has full rank, then the pencil Λ̃ is full.

Proof. Suppose U and V are constant matrices with e(m + n) columns and e(m + n) rows, respectively, that satisfy UΛ̃V = 0. Write U = (U1 ··· U_{m+n}) and partition V accordingly, where each U_p has e columns, V_0 has ℓe rows, and each V_q with q > 0 has e rows. Also let U_0 = (U1 ··· U_m). Then UΛ̃V = 0 implies rk U + rk V ≤ (m + n)e by the choice of n. Therefore Λ̃ is full.

Proposition 3.2. Let Λ be a homogeneous full pencil of size e, and let X ∈ M_{m×ℓ}(C)^d with ℓ ≤ m be such that Λ(X) has full rank. Then there exist X̂ ∈ M_{m×(n+m−ℓ)}(C)^d and X̌ ∈ M_{n×(n+m−ℓ)}(C)^d for some n ∈ N such that

(3.3) det Λ( (X X̂; 0 X̌) ) ≠ 0,

where (X X̂; 0 X̌) denotes the tuple of square matrices ((X_j X̂_j; 0 X̌_j))_j of size m + n.

Proof. A full pencil is up to a left-right basis change equal to a block upper-triangular pencil with irreducible diagonal blocks. Suppose that the lemma holds for irreducible pencils; since the set of pairs (X̂, X̌) ∈ M_{m×(n+m−ℓ)}(C)^d × M_{n×(n+m−ℓ)}(C)^d satisfying (3.3) is Zariski open, the lemma then also holds for full pencils. Thus we can without loss of generality assume that Λ is irreducible. Let n1 = (m − ℓ)(e − 1) and e1 = (m + n1)e. By Lemma 3.1 applied to C = Σ_j X_j ⊗ Λ_j and Proposition 2.4, there exists Z ∈ M_{e1−1}(C)^{d(m+n1)(n1+m−ℓ)} such that Λ̃(Z) is invertible. Therefore the corresponding larger block matrix is invertible, since it is similar to Λ̃(Z) (via a permutation matrix). Thus there are X̂ ∈ M_{m×(n+m−ℓ)}(C)^d and X̌ ∈ M_{n×(n+m−ℓ)}(C)^d such that (3.3) holds.

We are ready to prove the first main result of the paper.
Theorem 3.3. Let Λ be a full pencil of size e, and let Y ∈ M_{m×ℓ}(C)^d and W ∈ M_{ℓ×m}(C)^d with ℓ ≤ m be such that Λ(Y) and Λ(W) have full rank. Then there are n ≥ m and Z ∈ M_n(C)^d such that the square tuple assembled from Y, W and Z as in the proof below has an invertible pencil evaluation.

Proof. By Proposition 3.2 and its transpose analog there exist k ∈ N and rectangular tuples completing Y and W, respectively, to square tuples of size m + k with invertible pencil evaluations. Thus the block matrix (3.5) obtained by combining these completions is invertible; its block structure and the linearity of Λ imply that (3.5) is invertible for every ε ≠ 0, so we can choose ε = 1. After performing elementary row and column operations on (3.5) we conclude that the evaluation of Λ at a square tuple of size 2(m + k) containing Y and W as blocks is invertible. So the theorem holds for n = 2(m + k).

Remark 3.4. It follows from the proofs of Proposition 3.2 and Theorem 3.3 that one can choose n = 2(e³m² + em(2eℓ − 1) + ℓ(eℓ − 2)) in Theorem 3.3. However, this is unlikely to be the minimal choice for n.

Let M_∞(C) be the algebra of N×N matrices over C that have only finitely many nonzero entries in each column; that is, elements of M_∞(C) can be viewed as operators on ⊕_N C. Given r ∈ R_C(x) let dom_∞ r be the set of tuples X ∈ M_∞(C)^d such that r(X) is well-defined. If (u, M, v) is a linear representation of r of size e, then M(X) ∈ M_e(M_∞(C)) is invertible for every X ∈ dom_∞ r by the definition of a linear representation adopted in this paper.

Proposition 3.5. Let r ∈ R_C(x) admit a linear representation, and let X ∈ dom_∞ r be a tuple of hermitian operators. Then any finite-dimensional rectangular truncation of X with full-rank pencil evaluation can be completed to a tuple of finite-dimensional hermitian operators in hdom r that agrees with X on the prescribed subspace.

Proof. Let (u, M, v) be a linear representation of r of size e. By assumption, the relevant rectangular evaluations of M have full rank. Let n ∈ N be as in Theorem 3.3. Then there is Z′ ∈ M_n(C)^{1+d} such that

(3.7) det( M applied to the completed tuple built from Z′ ) ≠ 0.

The set of all Z′ ∈ M_n(C)^{1+d} satisfying (3.7) is thus a nonempty Zariski open set in M_n(C)^{1+d}. Since the set of positive definite n×n matrices is Zariski dense in M_n(C), there exists Z′ ∈ H_n(C)^{1+d} with Z′_0 ≻ 0 such that (3.7) holds. The resulting hermitian completion then lies in hdom r by the definition of a linear representation.

We also record a non-hermitian version of Proposition 3.5.

Proposition 3.6. Let r ∈ R_C(x). If X ∈ M_{m×ℓ}(C)^d with ℓ ≤ m is such that M(X) has full rank for a linear representation (u, M, v) of r, then X can be completed to a square matrix tuple in dom r.

Proof. We apply a similar reasoning as in the proof of Proposition 3.5, only this time with Proposition 3.2 instead of Theorem 3.3, and without hermitian considerations.

Multiplication operators attached to a formal rational expression

In this section we assign a tuple of operators X on a vector space of countable dimension to each formal rational expression r, so that r is well-defined at X and the finite-dimensional restrictions of X partially retain a certain multiplicative property. Fix an expression r ∈ R_C(x). Without loss of generality we assume that all the variables in x appear as subexpressions in r (otherwise we replace x by a suitable subtuple). Let

R = {1} ∪ {q ∈ R_C(x)\C : q is a subexpression of r or r*} ⊂ R_C(x).

Note that R is finite, hdom q ⊇ hdom r for q ∈ R, and q ∈ R implies q* ∈ R. Let R̄ ⊂ C(<x>) be the set of noncommutative rational functions represented by R. For ℓ ∈ N we define the finite-dimensional vector subspaces

V_ℓ = span{𝔮1 ··· 𝔮ℓ : 𝔮i ∈ R̄}.

Note that V_ℓ ⊆ V_{ℓ+1} since 1 ∈ R. Furthermore, let V = ⋃_{ℓ∈N} V_ℓ. Then V is a finitely generated *-subalgebra of C(<x>). For j = 1, . . . , d we define the left multiplication operators

X_j : V → V, X_j 𝔮 = x_j 𝔮.

Lemma 4.1. There is a linear functional φ : V → C such that φ(𝔮*) = φ(𝔮)̅ and φ(𝔮𝔮*) > 0 for all 𝔮 ∈ V \ {0}.

Proof. For some X ∈ hdom r let m = max_{q∈R} ‖q(X)‖. Let ℓ ∈ N. Since V_ℓ is finite-dimensional, there exist n_ℓ ∈ N and X^{(ℓ)} ∈ hdom_{n_ℓ} r such that 𝔮(X^{(ℓ)}) ≠ 0 for every 𝔮 ∈ V_ℓ \ {0}; let φ be a suitably weighted sum of the functionals 𝔮 ↦ tr 𝔮(X^{(ℓ)}). Since V is a C-algebra generated by R̄, routine estimates show that φ is well-defined. It is also clear that φ has the desired properties.

For the rest of the paper fix a functional φ as in Lemma 4.1. Then

(4.2) (𝔮1, 𝔮2) = φ(𝔮2* 𝔮1)

is an inner product on V. With respect to this inner product we can inductively build an ordered orthogonal basis B of V with the property that B ∩ V_ℓ is a basis of V_ℓ for every ℓ ∈ N.

Lemma 4.2.
With respect to the inner product (4.2) and the ordered basis B as above, the operators X1, . . . , Xd are represented by hermitian matrices in M_∞(C), and X ∈ dom_∞ r.

Proof. Since (X_j 𝔮1, 𝔮2) = φ(𝔮2* x_j 𝔮1) = (𝔮1, X_j 𝔮2) for all 𝔮1, 𝔮2 ∈ V and X_j(V_ℓ) ⊆ V_{ℓ+1} for all ℓ ∈ N, it follows that the matrix representation of X_j with respect to B is hermitian and has only finitely many nonzero entries in each column and row. The rest follows inductively on the construction of r since the X_j are the left multiplication operators on V.

Note that τ(s*) = τ(s) for all s ∈ R_C(x).

Proposition 4.3. Let the notation be as above, and let U be a finite-dimensional Hilbert space containing V_{ℓ+1}. If X̄ is a d-tuple of hermitian operators on U such that X̄ ∈ hdom r and X̄_j|_{V_ℓ} = X_j|_{V_ℓ} for j = 1, . . . , d, then X̄ ∈ hdom q and

(4.3) q(X̄)𝔰 = 𝔮𝔰

for every q ∈ R and s ∈ R ··· R (ℓ factors) satisfying 2τ(q) + τ(s) ≤ ℓ + 2, where 𝔮, 𝔰 ∈ V denote the functions represented by q and s.

Positive noncommutative rational functions

In this section we prove various positivity statements for noncommutative rational functions. Let L be a hermitian monic pencil of size e; that is, L = I + H1x1 + ··· + Hdxd with H_j ∈ H_e(C). Then

D(L) = ⋃_{n∈N} D_n(L), where D_n(L) = {X ∈ H_n(C)^d : L(X) ⪰ 0},

is a free spectrahedron. The main result of the paper is Theorem 5.2, which describes noncommutative rational functions that are positive semidefinite or undefined at each tuple in a given free spectrahedron D(L). In particular, Theorem 5.2 generalizes [Pas18, Theorem 3.1] to noncommutative rational functions with singularities in D(L).

5.1. Rational convex Positivstellensatz. Let L be a hermitian monic pencil of size e. To r ∈ R_C(x) we assign the finite set R, the vector spaces V_ℓ and the operators X_j as in Section 4. For ℓ ∈ N we also define

S_ℓ = {𝔮 ∈ V_ℓ : 𝔮* = 𝔮},  Q_ℓ = { Σ_i 𝔮_i* 𝔮_i + Σ_k 𝔳_k* L 𝔳_k : 𝔮_i ∈ V_ℓ, 𝔳_k ∈ V_ℓ^e }.

Then S_ℓ is a real vector space and Q_ℓ is a convex cone. The proof of the following proposition is a rational modification of a common argument in free real algebraic geometry (cf. [HKM12, Proposition 3.1] and [KPV17, Proposition 4.1]). A convex cone is salient if it does not contain a line.

Proposition 5.1. The cone Q_ℓ is salient and closed in S_{2ℓ+1} with the Euclidean topology.

Proof. As in the proof of Lemma 4.1 there exists X ∈ hdom r such that 𝔮(X) ≠ 0 for all 𝔮 ∈ V_{2ℓ+1} \ {0}. Furthermore, we can choose X close enough to 0 so that L(X) ⪰ ½I. Then clearly 𝔮(X) ⪰ 0 for every 𝔮 ∈ Q_ℓ, so Q_ℓ ∩ −Q_ℓ = {0} and thus Q_ℓ is salient. Note that ‖𝔮‖• = ‖𝔮(X)‖ is a norm on V_{2ℓ+1}. Also, finite-dimensionality of S_{2ℓ+1} implies that every element of Q_ℓ can be written as a sum of N = 1 + dim S_{2ℓ+1} elements of the form 𝔮*𝔮 and 𝔳*L𝔳 for 𝔮 ∈ V_ℓ, 𝔳 ∈ V_ℓ^e by Carathéodory's theorem [Bar02, Theorem I.2.3]. Assume that a sequence {𝔯_n}_n ⊂ Q_ℓ converges to 𝔮 ∈ S_{2ℓ+1}. After restricting to a subsequence we can assume that there are M, N ∈ N such that

𝔯_n = Σ_{i=1}^M 𝔮_{n,i}* 𝔮_{n,i} + Σ_{j=1}^N 𝔳_{n,j}* L 𝔳_{n,j}

for all n ∈ N. The definition of the norm ‖·‖• implies ‖𝔮_{n,i}‖•² ≤ ‖𝔯_n‖• and max_{1≤i≤e} ‖(𝔳_{n,j})_i‖•² ≤ 2‖𝔯_n‖•. In particular, the sequences {𝔮_{n,i}}_n ⊂ V_ℓ for 1 ≤ i ≤ M and {𝔳_{n,j}}_n ⊂ V_ℓ^e for 1 ≤ j ≤ N are bounded. Hence, after restricting to subsequences, we may assume that they are convergent: 𝔮_i = lim_n 𝔮_{n,i} for 1 ≤ i ≤ M and 𝔳_j = lim_n 𝔳_{n,j} for 1 ≤ j ≤ N. Consequently we have

𝔮 = Σ_{i=1}^M 𝔮_i* 𝔮_i + Σ_{j=1}^N 𝔳_j* L 𝔳_j ∈ Q_ℓ.

We are now ready to prove the main result of this paper by combining a truncated GNS construction with extending matrix tuples into the domain of a rational expression as in Proposition 3.5.

Theorem 5.2 (Rational convex Positivstellensatz). Let L be a hermitian monic pencil and r ∈ R_C(x).
If Q_{2τ(r)+1} is as above, then r(X) ⪰ 0 for every X ∈ hdom r ∩ D(L) if and only if 𝔯 ∈ Q_{2τ(r)+1}.

Proof. If 𝔯 ∈ Q_{2τ(r)+1}, then r(X) ⪰ 0 for every X ∈ hdom r ∩ D(L) because the hermitian domain of every element of V_{2τ(r)+1} contains hdom r. Conversely, suppose 𝔯 ∉ Q_ℓ for ℓ = 2τ(r) + 1. Since Q_ℓ is a salient closed cone in S_{2ℓ+1} by Proposition 5.1, there is a linear functional λ on S_{2ℓ+1} such that λ(𝔯) < 0 and λ(𝔮) > 0 for all 𝔮 ∈ Q_ℓ \ {0}; in particular,

(5.1) ⟨𝔮1, 𝔮2⟩ = λ(𝔮2* 𝔮1)

defines a scalar product on V_{ℓ+1}. Furthermore,

(5.2) ⟨L(X)𝔳, 𝔳⟩ = λ(𝔳* L 𝔳) > 0 for all nonzero 𝔳 ∈ V_{ℓ+1}^e,

where the canonical extension of ⟨·,·⟩ to a scalar product on C^e ⊗ V_{ℓ+1} is considered. Let B be an ordered orthogonal basis of V with respect to the inner product (·,·) as in Section 4; recall that such a basis has the property that B ∩ V_k is a basis for V_k for all k ∈ N. Let B0 be an ordered orthogonal basis of V_{ℓ+2} with respect to ⟨·,·⟩ that contains a basis for V_{ℓ+1}, and let B1 = B \ V_{ℓ+2}. If we identify the operators X_j with their matrix representations relative to the ordered basis (B0, B1) of V, then the X_j ∈ M_∞(C) are hermitian matrices by Lemma 4.2 and (5.1). Let U0 be the orthogonal complement of V_{ℓ+1} in V_{ℓ+2} relative to ⟨·,·⟩. Since X_j(V_{ℓ+1}) ⊆ V_{ℓ+2}, we can consider the restriction X_j|_{V_{ℓ+1}} in a block form (X_j; Y_j) with respect to the decomposition V_{ℓ+2} = V_{ℓ+1} ⊕ U0. Since X ∈ dom_∞ r, by Proposition 3.5 there exist a finite-dimensional vector space U1, a scalar product on V_{ℓ+1} ⊕ U0 ⊕ U1 extending ⟨·,·⟩, an operator E on U0 ⊕ U1, and a d-tuple Z of hermitian operators on U0 ⊕ U1 such that

(5.3) the hermitian tuple X̃ on V_{ℓ+1} ⊕ U0 ⊕ U1 built from the blocks X_j, Y_j, E and Z lies in hdom r.

Since X_j(V_ℓ) ⊆ V_{ℓ+1}, we conclude that

(5.4) X̃_j|_{V_ℓ} = X_j|_{V_ℓ} for j = 1, . . . , d.

Observe that for all but finitely many ε1, ε2 > 0 we can replace Z, E with ε1Z, ε2E and (5.3) still holds. By (5.2) we can thus assume that Z and E are close enough to 0 so that L(X̃) ⪰ 0. Finally, since (5.4) holds and 2τ(r) + τ(1) = ℓ + 2, Proposition 4.3 implies

⟨r(X̃)1, 1⟩ = ⟨𝔯, 1⟩ = λ(𝔯) < 0.

Therefore X̃ ∈ hdom r ∩ D(L) and r(X̃) is not positive semidefinite.

Given a unital *-algebra 𝒜 and A = A* ∈ M_ℓ(𝒜), the quadratic module in 𝒜 generated by A is

QM_𝒜(A) = { Σ_i b_i* b_i + Σ_j v_j* A v_j : b_i ∈ 𝒜, v_j ∈ 𝒜^ℓ }.

Theorem 5.2 then in particular states that noncommutative rational functions positive semidefinite on a free spectrahedron D(L) belong to QM_{C(<x>)}(L).

Remark 5.3. Let r ∈ R_C(x) and n = 2(e³m² + em(2eℓ − 1) + ℓ(eℓ − 2)), where e is the size of a linear representation of r (with m and ℓ as in the corresponding application of Theorem 3.3). If r ⋡ 0 on hdom r ∩ D(L), then by Remark 3.4 and the proofs of Theorem 5.2 and Proposition 3.5 there exists X ∈ hdom_n r ∩ D_n(L) such that r(X) ⋡ 0.

The solution of Hilbert's 17th problem for a free skew field is now as follows.

Corollary 5.4. Let 𝔯 ∈ C(<x>) be such that 𝔯(X) ⪰ 0 for all X ∈ hdom 𝔯. Then 𝔯 = Σ_i 𝔯_i* 𝔯_i for some 𝔯_i ∈ C(<x>).

Proof. By Proposition 2.1 there exists r ∈ 𝔯 such that hdom 𝔯 = hdom r. The corollary then follows directly from Theorem 5.2 applied to L = 1, since the hermitian domain of an element in V_{2τ(r)} contains hdom 𝔯.

Remark 5.5. Corollary 5.4 also indicates a subtle distinction between solutions of Hilbert's 17th problem in the classical commutative context and in the free context. While every (commutative) positive rational function ρ is a sum of squares of rational functions, in general one cannot choose summands that are defined on the whole real domain of the original function ρ. On the other hand, a positive noncommutative rational function always admits a sum-of-squares representation with terms defined on its hermitian domain.

For a possible future use we describe noncommutative rational functions whose invertible evaluations have nonconstant signature; polynomials of this type were of interest in [HKV20, Section 3.3].

Corollary 5.6. Let 𝔯 = 𝔯* ∈ C(<x>). The following are equivalent:
(i) there are n ∈ N and X, Y ∈ hdom_n 𝔯 such that 𝔯(X), 𝔯(Y) are invertible and have distinct signatures;
(ii) neither 𝔯 nor −𝔯 equals Σ_i 𝔯_i* 𝔯_i for some 𝔯_i ∈ C(<x>).

Proof. (i)⇒(ii) If ±𝔯 = Σ_i 𝔯_i* 𝔯_i, then ±𝔯(X) ⪰ 0 for all X ∈ hdom 𝔯.
(ii)⇒(i) Let O_n = hdom_n 𝔯 ∩ hdom_n 𝔯⁻¹. By Remark 2.5 there is n0 ∈ N such that O_n ≠ ∅ for all n ≥ n0.
Suppose that 𝔯 has constant signature on O_n for each n ≥ n0, i.e., 𝔯(X) has π_n positive eigenvalues for every X ∈ O_n. Since O_k ⊕ O_ℓ ⊂ O_{k+ℓ} for all k, ℓ ∈ N, we have

(5.5) nπ_m = π_{mn} = mπ_n for all m, n ≥ n0.

If π_{n1} = n1 for some n1 ≥ n0, then π_n = n for all n ≥ n0 by (5.5), so 𝔯 ⪰ 0 on O_n for every n. Thus 𝔯 = Σ_i 𝔯_i* 𝔯_i by Theorem 5.2. An analogous conclusion holds if π_{n1} = 0 for some n1 ≥ n0. However, (5.5) excludes any alternative: if n0 ≤ m < n and n is a prime number, then 0 < π_n < n contradicts (5.5).

5.2. Positivity and invariants. Let G be a subgroup of the unitary group U_d(C). The action of G on C^d induces a linear action of G on C(<x>). If G is finite and solvable, then the subfield of G-invariants C(<x>)^G is finitely generated [KPPV20, Theorem 1.1] and in many cases again a free skew field [KPPV20, Theorem 1.3]. Furthermore, we can now extend [KPPV20, Corollary 6.6] to invariant noncommutative rational functions with singularities.

Corollary 5.7. Let G ⊂ U_d(C) be a finite solvable group. Then there exists R_G ∈ GL_{|G|}(C(<x>)) with the following property. If 𝔯 ∈ C(<x>)^G and L is a hermitian monic pencil of size e, then 𝔯 ⪰ 0 on hdom 𝔯 ∩ D(L) if and only if 𝔯 belongs to the rational quadratic module generated by L and twisted by R_G, in the sense of [KPPV20, Corollary 6.4].

Proof. Combine [KPPV20, Corollary 6.4] and Theorem 5.2.

5.3. Real free skew field and other variations. In this subsection we explain how the preceding results apply to real free skew fields and their symmetric evaluations, and to another natural involution on a free skew field.

Corollary 5.8. Let L be a hermitian monic pencil with real entries and let 𝔯 ∈ R(<x>). Then 𝔯 ⪰ 0 on hdom 𝔯 ∩ D(L) if and only if 𝔯 ∈ QM_{R(<x>)}(L).

Proof. If 𝔯 ∈ R(<x>) and 𝔯 ⪰ 0 on hdom 𝔯 ∩ D(L), then 𝔯 ∈ QM_{C⊗R(<x>)}(L) by Theorem 5.2, because the complex vector spaces V_ℓ are spanned by functions given by subexpressions of some r ∈ 𝔯, and we can choose r in which only real scalars appear. For 𝔮 ∈ C ⊗ R(<x>) we define re(𝔮) = ½(𝔮 + 𝔮̄) and im(𝔮) = (i/2)(𝔮̄ − 𝔮). If 𝔯 = Σ_j 𝔮_j* 𝔮_j + Σ_k 𝔳_k* L 𝔳_k with 𝔮_j ∈ C ⊗ R(<x>) and 𝔳_k ∈ (C ⊗ R(<x>))^e, then

𝔯 = re(𝔯) = Σ_j ( re(𝔮_j)* re(𝔮_j) + im(𝔮_j)* im(𝔮_j) ) + Σ_k ( re(𝔳_k)* L re(𝔳_k) + im(𝔳_k)* L im(𝔳_k) )

and so 𝔯 ∈ QM_{R(<x>)}(L).

Given 𝔯 ∈ R(<x>) one might prefer to consider only the tuples of real symmetric matrices in the domain of 𝔯, and not the whole hdom 𝔯. Since there exist *-embeddings M_n(C) ↪ M_{2n}(R), evaluations on tuples of real symmetric 2n×2n matrices carry at least as much information as evaluations on tuples of hermitian n×n matrices. Consequently, all dimension-independent statements in this paper also hold if only symmetric tuples are considered. However, it is worth mentioning that for 𝔯 ∈ R(<x>), it can happen that dom_n 𝔯 contains no tuples of symmetric matrices for all odd n, e.g. if 𝔯 = (x1x2 − x2x1)⁻¹.

Another commonly considered free skew field with involution is C(<x, x*>), generated by the 2d variables x1, . . . , xd, x1*, . . . , xd*, which is endowed with the involution * that swaps x_j and x_j*. Elements of C(<x, x*>) can be evaluated on d-tuples of complex matrices. The results of this paper also directly apply to C(<x, x*>) and such evaluations because C(<x, x*>) is freely generated by the elements ½(x_j + x_j*), (i/2)(x_j* − x_j), which are fixed by *. Finally, as in Corollary 5.8 we see that a suitable analog of Theorem 5.2 also holds for R(<x, x*>) and evaluations on d-tuples of real matrices.

5.4. Examples of non-convex Positivstellensätze. Given 𝔪 = 𝔪* ∈ M_ℓ(C(<x>)) let

D(𝔪) = ⋃_{n∈N} D_n(𝔪), where D_n(𝔪) = {X ∈ hdom_n 𝔪 : 𝔪(X) ⪰ 0},

be its positivity domain. Here, the domain of 𝔪 is the intersection of the domains of its entries.

Proposition 5.9.
Let Ñ " Ñ˚P M ℓ pC p ăx q ąq and assume there exist a hermitian monic pencil L of size e ě ℓ, a˚-automorphism ϕ of C p ăx q ą, and A P GL e pC p ăx q ąq such that (5.6) ϕpÑq ' I " A˚LA. Proof. The relation (5.6), Remark 2.5 and the convexity of D n pLq imply that the sets D n pϕpÑqq and D n pÑq have the same closures as their interiors in the Euclidean topology for all but finitely many n. Therefore Ö| hdom ÖXDpÑq ľ 0 ô ϕpÖq| hdom ϕpÖqXDpϕpÑqq ľ 0 ô ϕpÖq| hdom ϕpÖqXDpLq ľ 0 by Theorem 5.2 and (5.6). The following example presents a family of quadratic noncommutative polynomials q " q˚P Căx, x˚ą that admit a rational Positivstellensatz on their (not necessarily convex) positivity domains Dpqq " tX : qpX, X˚q ľ 0u. One might say that q is a hereditary quadratic polynomial of positive signature 1. Note that D 1 pqq is not convex if a 0 R C. Since a 0 , . . . , a n are linearly independent affine polynomials in Căxą (and in particular n ď d), there exists a linear fractional automorphism ϕ on C p ăx q ą such that ϕ´1px j q " a j a´1 0 for 1 ď j ď n. We extend ϕ uniquely to å -automorphism on C p ăx, x˚q ą. Then ϕpa 0 q´˚ϕpqqϕpa 0 q´1 " 1´x1x 1´¨¨¨´xn x n and thus ϕpqq ' I n " A˚LA where Therefore Ö ľ 0 on hdom ÖXDpqq if and only if Ö P QM C p ăx,x˚q ą pqq for every Ö P C p ăx, x˚q ą by Proposition 5.9. For example, the polynomial x1x 1´1 if of the type discussed above, and thus admits a rational Positivstellensatz. In particular, On the other hand, we claim that x 1 x1´1 R QM Căx,x˚ą px1x 1´1 q (cf. [HM04, Example 4]). If x 1 x1´1 were an element of QM Căx,x˚ą px1x 1´1 q, then the implication S˚S´I ľ 0 ñ SS˚´I ľ 0 would be valid for every operator S on an infinite-dimensional Hilbert space; however it fails if S is the forward shift operator on ℓ 2 pNq. A different Positivstellensatz (polynomial, but with a slack variable) for hereditary quadratic polynomials is given in [HKV20, Corollary 4.6]. 5.5. Eigenvalue optimization. Theorem 5.2 is also essential for optimization of noncommutative rational functions. Namely, it implies that finding the eigenvalue supremum or infimum of a noncommutative rational function on a free spectrahedron is equivalent to solving a semidefinite program [BPT13]. This equivalence was stated in [KPV17, Section 5.2.1] for regular noncommutative rational functions; the novelty is that Theorem 5.2 now confirms its validity for noncommutative rational functions with singularities. Let L be a hermitian monic pencil of size e, and let Ö " Ö˚P C p ăx q ą. Suppose we are interested in Choose some r P Ö (the simpler representative the better) and let ℓ " 2τ prq`1. Theorem 5.2 then implies that where we can take M " dim S 2ℓ`1 and N " dim S 2ℓ` where H is a pdim V ℓ qˆpdim V ℓ q hermitian matrix and w is a vectorized basis of V ℓ . For constrained eigenvalue optimization (L is present), one can set up a similar semidefinite program using localizing matrices [BKP16, Definition 1.41]. More on domains In this section we prove two new results on (hermitian) domains. One of them is the aforementioned Proposition 2.1, which states that every noncommutative rational function admits a representative with the largest hermitian domain. The other one is Proposition 6.3 on cancellation of singularities of noncommutative rational functions. 6.1. Representatives with the largest hermitian domain. We will require a technical lemma about matrices over formal rational expressions and their hermitian domains. 
5.5. Eigenvalue optimization. Theorem 5.2 is also essential for optimization of noncommutative rational functions. Namely, it implies that finding the eigenvalue supremum or infimum of a noncommutative rational function on a free spectrahedron is equivalent to solving a semidefinite program [BPT13]. This equivalence was stated in [KPV17, Section 5.2.1] for regular noncommutative rational functions; the novelty is that Theorem 5.2 now confirms its validity for noncommutative rational functions with singularities. Let L be a hermitian monic pencil of size e, and let 𝒓 = 𝒓* ∈ ℂ(<x>). Suppose we are interested in the eigenvalue supremum of 𝒓 over D(L). Choose some r ∈ 𝒓 (the simpler the representative, the better) and let ℓ = 2τ(r) + 1. Theorem 5.2 then implies that this supremum can be computed as the optimum of a semidefinite program, where we can take M = dim S_{2ℓ+1} and N = dim S_{2ℓ+…}, and where H is a (dim V_ℓ) × (dim V_ℓ) hermitian matrix and w is a vectorized basis of V_ℓ. For constrained eigenvalue optimization (L is present), one can set up a similar semidefinite program using localizing matrices [BKP16, Definition 1.41].

6. More on domains

In this section we prove two new results on (hermitian) domains. One of them is the aforementioned Proposition 2.1, which states that every noncommutative rational function admits a representative with the largest hermitian domain. The other one is Proposition 6.3 on cancellation of singularities of noncommutative rational functions.

6.1. Representatives with the largest hermitian domain. We will require a technical lemma about matrices over formal rational expressions and their hermitian domains. A representative of a matrix 𝒎 over ℂ(<x>) is a matrix over R_ℂ(x) of representatives of the 𝒎_ij, and the domain of a matrix over R_ℂ(x) is the intersection of the domains of its entries.

Lemma 6.1. Let m be an e×e matrix over R_ℂ(x) such that 𝒎 ∈ GL_e(ℂ(<x>)). Then there exists s ∈ 𝒎^{-1} such that hdom m ∩ hdom 𝒎^{-1} = hdom s.

Proof. Throughout the proof we reserve italic letters (m, c, etc.) for matrices over R_ℂ(x) and bold letters (𝒎, 𝒄, etc.) for the corresponding matrices over ℂ(<x>). We prove the statement by induction on e. If e = 1, then m^{-1} is the desired expression. Assume the statement holds for matrices of size e − 1, and let c be the first column of m. Then hdom c ⊇ hdom m, and c(X) is of full rank for every X ∈ hdom m ∩ hdom 𝒎^{-1}. Hence

(6.1)  hdom (c*c)^{-1} ⊇ hdom m ∩ hdom 𝒎^{-1}.

The entries of (𝒎*𝒎)^{-1} can be represented by expressions s′_ij which are sums and products of the expressions m_ij, m̄_ij, (c*c)^{-1}, ŝ_ij. Thus s′ ∈ (𝒎*𝒎)^{-1} satisfies hdom m ∩ hdom 𝒎^{-1} = hdom s′ by (6.4). Finally, s = s′m* is the desired expression because 𝒎^{-1} = (𝒎*𝒎)^{-1}𝒎*.

Proof of Proposition 2.1. Let 𝒓 ∈ ℂ(<x>). Let e ∈ ℕ, an affine matrix pencil M of size e and u, v ∈ ℂ^e be such that 𝒓 = u* M^{-1} v in ℂ(<x>), with e minimal. Recall that dom 𝒓 = {X : det M(X) ≠ 0}. Since M contains no inverses, it is defined at every matrix tuple; thus by Lemma 6.1 there is a representative of M^{-1} whose hermitian domain equals {X = X* : det M(X) ≠ 0}. Since 𝒓 is a linear combination of the entries of M^{-1}, there exists r ∈ 𝒓 such that hdom r = hdom 𝒓 by (6.5).

For example, let 𝒎 = (x_1 x_2; x_3 x_4) and let 𝒓 be the (2,2) entry of 𝒎^{-1}. Since 𝒎^{-1} = (𝒎*𝒎)^{-1}𝒎*, the expression

( (x_2² + x_4²) − (x_2 x_1 + x_4 x_3)(x_1² + x_3²)^{-1}(x_1 x_2 + x_3 x_4) )^{-1} ( x_4 − (x_2 x_1 + x_4 x_3)(x_1² + x_3²)^{-1} x_3 )

represents 𝒓, and its hermitian domain coincides with hdom 𝒓. Of course, the expression (x_4 − x_3 x_1^{-1} x_2)^{-1} is a much simpler representative of 𝒓.
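As a numerical sanity check of this example in the scalar case n = 1 (the Schur-complement form below is our own reconstruction of the displayed expression, not code from the paper): at x_1 = 0 the simple representative (x_4 − x_3 x_1^{-1} x_2)^{-1} is undefined, while the expression obtained from (𝒎*𝒎)^{-1}𝒎* still evaluates.

```python
import numpy as np

def simple_rep(x1, x2, x3, x4):
    # (x4 - x3 x1^{-1} x2)^{-1}: requires x1 to be invertible
    return np.linalg.inv(x4 - x3 @ np.linalg.inv(x1) @ x2)

def schur_rep(x1, x2, x3, x4):
    # (2,2) entry of (m* m)^{-1} m* for m = [[x1, x2], [x3, x4]],
    # written out with Schur complements (the xi are hermitian)
    P = x1 @ x1 + x3 @ x3
    Q = x1 @ x2 + x3 @ x4
    R = x2 @ x1 + x4 @ x3
    S = x2 @ x2 + x4 @ x4
    A22 = np.linalg.inv(S - R @ np.linalg.inv(P) @ Q)
    return A22 @ (x4 - R @ np.linalg.inv(P) @ x3)

one = lambda t: np.array([[float(t)]])
X = tuple(map(one, (2, 1, 1, 3)))
print(simple_rep(*X), schur_rep(*X))  # both [[0.4]]
Y = tuple(map(one, (0, 1, 1, 0)))     # x1 = 0, yet m(Y) is invertible
print(schur_rep(*Y))                  # [[0.]]; simple_rep(*Y) raises LinAlgError
```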
6.2. Cancellation of singularities. In the absence of left ideals in skew fields, the following proposition serves as a rational analog of Bergman's Nullstellensatz for noncommutative polynomials [HM04, Theorem 6.3]. The proof below omits some of the details since it is a derivative of the proof of Theorem 5.2.

Proposition 6.3. The following are equivalent for 𝒓, 𝒔 ∈ ℂ(<x>).

(i)⇒(ii): Suppose (ii) does not hold; thus there are r ∈ 𝒓, s ∈ 𝒔 and Y ∈ dom r ∩ dom s such that det r(Y) = 0. Similarly as in Section 4, denote R = {1} ∪ {q ∈ R_ℂ(x)∖ℂ : q is a subexpression of r or s} and let R̄ be its image in ℂ(<x>). We also define the finite-dimensional vector spaces V_ℓ and the finitely generated algebra V as before. The left ideal V𝒓 in V is proper: if 𝒒𝒓 = 1 for 𝒒 ∈ V, then q(Y) r(Y) = I since Y ∈ dom q, which contradicts det r(Y) = 0. Furthermore, 𝒔 ∉ V𝒓 since (ii) does not hold. Let K = V/V𝒓, and let K_ℓ be the image of V_ℓ for every ℓ ∈ ℕ. Let X_j : K → K be the operator given by left multiplication with x_j; note that X_j(K_ℓ) ⊆ K_{ℓ+1} for all ℓ. By induction on the construction of q ∈ R it is straightforward to see that q(X) is well-defined for every q ∈ R. Let ℓ = 2 max{τ(r), τ(s)} − 2. By Proposition 3.6 there exist a finite-dimensional vector space U and a d-tuple of operators X̂ on K_{ℓ+1} ⊕ U such that X̂ ∈ dom r ∩ dom s and X̂_j|_{K_ℓ} = X_j|_{K_ℓ} for j = 1, …, d. A slight modification of Proposition 4.3 implies that r(X̂)[1] = [𝒓] = 0 and s(X̂)[1] = [𝒔] ≠ 0, where [𝒒] ∈ K denotes the image of 𝒒 ∈ V.

The implication (i)⇒(ii) in Proposition 6.3 fails if only hermitian domains are considered (e.g., take 𝒓 = x_1² and 𝒔 = x_1). It is also worth mentioning that while Proposition 6.3 might look rather straightforward at first glance, there is a certain subtlety to it. Namely, the equivalence in Proposition 6.3 fails if only matrix tuples of a fixed size are considered. For example, let 𝒓 = x_1 and 𝒔 = x_1 x_2; then dom_1 𝒓 ∩ dom_1 𝒔 = ℂ² and ker 𝒓(X) ⊆ ker 𝒔(X) for all X ∈ ℂ², but dom_1(𝒔𝒓^{-1}) = (ℂ∖{0}) × ℂ (cf. [Vol17, Example 2.1 and Theorem 3.10]).
2021-01-08T02:15:28.088Z
2021-01-07T00:00:00.000
{ "year": 2021, "sha1": "0054530ee8c5e3f0d071d337103d7e9ae5fba597", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0054530ee8c5e3f0d071d337103d7e9ae5fba597", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
195791612
pes2o/s2orc
v3-fos-license
Spatially-Coupled Neural Network Architectures

In this work, we leverage advances in sparse coding techniques to reduce the number of trainable parameters in a fully connected neural network. While most of the works in the literature impose $\ell_1$ regularization, DropOut or DropConnect techniques to induce sparsity, our scheme considers feature importance as a criterion to allocate the trainable parameters (resources) efficiently in the network. Even though sparsity is ensured, $\ell_1$ regularization requires training on all the resources in a deep neural network. The DropOut/DropConnect techniques reduce the number of trainable parameters in the training stage by dropping a random collection of neurons/edges in the hidden layers. However, both these techniques do not pay heed to the underlying structure in the data when dropping the neurons/edges. Moreover, these frameworks require a storage space equivalent to the number of parameters in a fully connected neural network. We address the above issues with a more structured architecture inspired by spatially-coupled sparse constructions. The proposed architecture is shown to have a performance akin to a conventional fully connected neural network with dropouts, while achieving a $94\%$ reduction in the training parameters. Extensive simulations are presented and the performance of the proposed scheme is compared against traditional neural network architectures.

Introduction
Deep neural networks excel at many tasks but usually suffer from two problems: 1) they are cumbersome and difficult to deploy on embedded devices (Crowley et al., 2018), and 2) they tend to overfit the training data (Hawkins, 2004). Several approaches have been proposed to overcome each of them. DropOut (Srivastava et al., 2014), DropConnect (Wan et al., 2013) and weight-decay (Krogh & Hertz, 1992) are a few examples of techniques that try to overcome overfitting. DropOut drops a random subset of neurons and all the edges connected to them during the training phase, while in DropConnect a random subset of edges is dropped during the training phase. The weight-decay method uses $\ell_2$ regularization to enforce uniformity among weights. However, all of these approaches induce sparsity only during the training phase, and testing is performed on the averaged network, which is not sparse. Hence, there is no reduction in the memory usage. Pruning methods try to compress a deep neural network to be more memory efficient. A simple, yet popular technique which uses hard thresholding to prune the weights close to zero has received significant attention (Reed, 1993). Weight-elimination (Weigend et al., 1991), the group LASSO regularizer (Scardapane et al., 2017) and the $L_{1/2}$ regularizer (Li et al., 2018) are examples of pruning techniques. Even though all these approaches have shown a lot of promise in sparsifying the network, they are computationally taxing schemes, since all the weights of the network need to be optimized over the training data. In this work, first, we empirically show that $\ell_1$ regularization results in a paradigm where the out degree of a neuron is representative of its importance. Then, we propose a random sparse neural network which trains far fewer parameters than a fully connected deep neural network without much degradation in the test performance.
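The first step of the proposed pipeline, detailed in the next section, is to rank input features by importance. As a concrete reference point, here is a minimal sketch of the two transforms the paper adopts for this, PCA and random-forest feature importance (the function names and hyperparameters below are ours, not from the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def order_features_pca(X):
    # PCA scores are already sorted by decreasing explained variance,
    # i.e. by decreasing "importance" of the transformed features
    return PCA(n_components=X.shape[1]).fit_transform(X)

def order_features_rf(X, y):
    # rank the original features by random-forest importance, descending
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return X[:, order]
```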
Our approach uses feature importance to order the inputs in the first layer, and inputs with lesser importance are allotted fewer trainable parameters. This sparsity structure is maintained across all the layers by using spatially-coupled sparse constructions (inspired by spatially-coupled LDPC codes) to maintain block sparsity. Our proposed architecture is pruned before training and avoids overfitting. Hence, it is both computationally efficient and memory efficient.

Proposed Sparse Construction
In most of the learning problems at hand, features are transformed into a space that separates the data well to aid the learning task. But traditional neural networks don't take advantage of the inherent ordering in the features which is based on an importance measure. Our proposed architecture is a feed-forward neural network that is designed to take advantage of the side information in the input features to allocate the trainable parameters efficiently. The first step in our model is to transform and rank input features based on their importance. Two such transformations that suit our model well are principal component analysis (PCA) (Wold et al., 1987) and random forest (RF) feature importance (Liaw et al., 2002). After applying one of the feature transformation methods to the input features, we get the transformed input x = [x_1, x_2, · · · , x_N], where the elements in x are ordered in decreasing order of the importance measure specific to the transformation. In the next section we first define the spatially-coupled (SC) layer and then show how to use feature importance information in an SC layer.

Spatially-Coupled (SC) Layer
First, we define the random sparse (RSP) layer, which is the building block of an SC layer. Let z^(l) denote the output vector of the l-th layer of the neural network. Also assume that W^(l) and b^(l) denote the weights and biases at layer l, respectively. The output of RSP layer l is given by

z^(l) = act(W^(l) z^(l-1) + b^(l)),     (1)

where act(.) is the activation function. Although the mathematical formulation in (1) is similar to a traditional fully connected (FC) layer, the fundamental difference is that supp(W^(l)) is the binary adjacency matrix of a random bipartite graph (Tanner graph), as opposed to the all-ones matrix in the FC case. More specifically, the RSP construction imposes sparsity in the bipartite graph between layers, in which each edge denotes a weight between two neurons in layers (l − 1) and l, in the same way as how low density parity check (LDPC) codes are constructed (Gallager, 1962). Here d_i^(l) denotes the out degree of the i-th neuron at layer l, [K] denotes the set of integers {1, · · · , K} and N^(l) represents the number of neurons at layer l. Then, the RSP layer is constructed as follows.
1. Pick a degree distribution λ(d; P) parameterized by the set of parameters P.
2. For each neuron i in layer (l − 1), draw d_i^(l-1) from the chosen degree distribution λ(d; P).
3. For each neuron i in layer (l − 1), uniformly pick d_i^(l-1) neurons in layer l and connect neuron i to them.
Notice that unlike the FC layer, not every neuron in layer l − 1 is connected to every neuron in layer l. In other words, the out degree of the i-th neuron at layer l − 1 is much less than the number of nodes in layer l, i.e., d_i^(l-1) ≪ N^(l).
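A minimal sketch of the RSP mask construction just described (the function and variable names are ours; a left-regular distribution simply assigns every neuron the same degree):

```python
import numpy as np

def rsp_mask(n_prev, n_next, degrees, rng):
    """Binary adjacency matrix of a random bipartite (Tanner) graph:
    neuron i of layer l-1 is connected to degrees[i] uniformly chosen
    neurons of layer l (steps 1-3 above)."""
    mask = np.zeros((n_next, n_prev), dtype=np.float32)
    for i in range(n_prev):
        targets = rng.choice(n_next, size=degrees[i], replace=False)
        mask[targets, i] = 1.0
    return mask

rng = np.random.default_rng(0)
mask = rsp_mask(784, 784, degrees=[53] * 784, rng=rng)  # left-regular, d = 53
# one RSP forward pass: z_l = act((W * mask) @ z_prev + b)
```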
Given the construction of an RSP layer, we now define the SC layer. Here, the neurons in each layer l are partitioned into B^(l) blocks of equal size, and the neurons in a block of layer l are randomly connected (locally) to neurons in a few adjacent blocks of layer l − 1. The number of adjacent blocks that participate in the local connections at layer l is called the receptive field of the layer and is denoted by r^(l) (see Figure 1). More specifically, the construction of the SC layer is done as follows:
1. Consider a window of r^(l-1) adjacent blocks with block indices {i, · · · , i + r^(l-1) − 1} from layer l − 1 and block index i from layer l, and construct an RSP layer locally.
2. Repeat step 1 for each of the blocks and choose a random instance of RSP for each of those windows.
We choose a simple left-regular degree distribution within each block. As pointed out before, we allocate resources (trainable parameters) according to the importance of the features. The degree of each neuron is equivalent to the amount of resources that is allocated to that neuron. As our input is ordered based on some importance measure, we allocate high degree to the blocks with the more important features and low degree to the blocks with the less important features. By construction, the intermediate layers also have the same ordering of feature importance, and hence at each layer we allocate degrees proportional to the importance measure.
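One plausible reading of this construction as code (our own sketch: the receptive field r = 2 is an assumed value, the paper's out degrees are treated as per-window degrees, and they are clamped to the block size, which simplifies the paper's bookkeeping):

```python
import numpy as np

def sc_mask(n_prev, n_next, B, r, block_degrees, rng):
    """Spatially-coupled mask: output block b is wired, RSP-style, to the
    window of blocks {b, ..., b+r-1} of the previous layer."""
    bp, bn = n_prev // B, n_next // B                 # block sizes
    mask = np.zeros((n_next, n_prev), dtype=np.float32)
    for b in range(B):
        rows = np.arange(b * bn, (b + 1) * bn)        # output block b
        lo, hi = b * bp, min((b + r) * bp, n_prev)    # input window
        for i in range(lo, hi):
            d = min(block_degrees[i // bp], bn)       # degree by source block
            mask[rng.choice(rows, size=d, replace=False), i] = 1.0
    return mask

rng = np.random.default_rng(0)
degrees = [98, 130, 98, 49, 20, 10, 10, 10]           # as in the experiments
mask = sc_mask(784, 784, B=8, r=2, block_degrees=degrees, rng=rng)
```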
Experiments and Discussion
We trained the spatially-coupled neural network on a classification problem on the Fashion-MNIST dataset (Xiao et al., 2017). The Fashion-MNIST dataset consists of 70000 samples of 28×28 gray-scale images, each one associated with a label from 10 classes. In all of our experiments, we deployed neural networks with 5 hidden layers of 784 neurons each and an output layer with 10 neurons. We used the sigmoid activation function and cross-entropy loss with $\ell_2$ regularization and a regularization parameter of 5 × 10^-5, except for the FC architecture. In the FC architecture, we used DropOut with a keep probability of 0.5. We also compared our method with the RSP construction with a left-regular degree distribution. The degree of each node is set to 53. We form our proposed SC neural network with 8 blocks in all layers. The RSP construction of each block is left-regular. The out degrees of the neurons in the blocks are set to {98, 130, 98, 49, 20, 10, 10, 10}. We note that the number of parameters in the RSP and SC constructions is approximately 93% less than in the fully connected case.

The findings reported in Figure 2 fortify the proposed framework of allocating more resources to features of higher importance. We trained an FC neural network classifier (with the same architecture as mentioned above) with $\ell_1$ regularization of 10^-4 on the input layer weights and 10^-5 on the other weights. PCA was used to sort features in descending order. The number of edges with absolute weight greater than 0.1 (a representative of contributing edges) emerging from an input neuron is plotted in Figure 2. It can be seen that it decays rapidly as the importance of features goes down, which validates our choice of SC graph constructions.

Table 1 summarizes the results of the classification task. If the input to the models is ordered PCA features, the spatially-coupled neural network shows the best accuracy, 87.18%. Comparing the SC with the FC neural network shows that FC, even with $\ell_2$ regularization and DropOut, tends to overfit the data because of the huge number of parameters in the model. To show that the ordering of features is crucial in the SC construction, we repeated the experiments with the order of features reversed, i.e., assigning high degrees to less important features and vice versa. It can be seen that in the reverse PCA case SC shows very poor performance, while the other methods maintain their performance as they are permutation invariant. In the RF reverse case, SC is very close to the best performance.

The difference between the two cases, which causes the drastic change in performance, is the quantization of feature importance. PCA yields a small number of very important features (a few high-variance components) and a large number of features which are not important (many low-variance components); reversing the order therefore assigns a very low degree to all of the high-variance features, degrading the accuracy substantially. However, RF reorders the input in a way that tends to give many roughly equally important features and some less important features. Therefore, not all of the important features are diminished, and some of them will still have high degree in the SC model.

One important property of the SC construction is that it preserves the feature importance ordering throughout the network. We validated this empirically by measuring the feature importance at each layer. Besides better interpretability of the model compared to FC neural networks, a nice application of this property is that at every layer we can prune the lower blocks after training while maintaining the overall accuracy. This can lead us to a highly sparse structure which can reduce the model complexity by even more than 95% with approximately the same performance. An avenue for future work is how to learn this class of transformations that respect the network, using fully connected layers.
2019-07-03T17:37:35.000Z
2019-07-03T00:00:00.000
{ "year": 2019, "sha1": "a273c180977c3ab3b8a96c613b30fe1d55d338e3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a273c180977c3ab3b8a96c613b30fe1d55d338e3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
260337484
pes2o/s2orc
v3-fos-license
Pancreatic ductal adenocarcinoma arising from the pancreatic parenchyma compressed by a huge pancreatic lipoma: a case report Background Pancreatic lipomas (PLs) arising from the adipose tissue in the pancreatic parenchyma are rare among pancreatic tumors. Coexisting pancreatic ductal adenocarcinoma (PDAC) and PLs have not been previously reported. Herein, we report a case of PDAC arising from the pancreatic parenchyma with chronic pancreatitis compressed by a large PL. Case presentation The patient was a 69-year-old male. He had been diagnosed with a PL using computed tomography (CT) 12 years previously. The tumor had been slowly growing and was followed up carefully because of the possibility of well-differentiated liposarcoma. During follow-up, laboratory data revealed liver damage and slightly elevated levels of inflammatory markers. Contrast-enhanced CT revealed the previously diagnosed 12 cm pancreatic head tumor and an irregular isodensity mass at the upper margin of the tumor that invaded and obstructed the distal common bile duct. Magnetic resonance cholangiopancreatography demonstrated no specific findings in the main pancreatic duct. Based on these imaging findings, the patient underwent endoscopic retrograde biliary drainage and bile duct brushing cytology, which revealed indeterminate findings. The differential diagnosis of the tumor at that time was as follows: (1) pancreatic liposarcoma (focal change from well-differentiated to dedifferentiated, not lipoma), (2) distal cholangiocarcinoma, and (3) pancreatic cancer. After the cholangitis improved, a pancreatoduodenectomy was performed. Histologically, hematoxylin–eosin staining revealed moderately differentiated PDAC compressed by proliferating adipose tissue. The adipose lesion showed homogeneous adipose tissue with no evidence of sarcoma, which led to a diagnosis of lipoma. Additionally, extensive fibrosis of the pancreatic parenchyma and atrophy of the acinar cells around the lipoma was suggestive of chronic pancreatitis. The pathological diagnosis was PDAC (pT2N0M0 pStage Ib) with chronic pancreatitis and PL. The postoperative course was uneventful, and the patient was discharged on the 15th day after surgery. The patient received adjuvant chemotherapy and has remained recurrence-free for more than 6 months. Conclusions PL may be associated with the development of PDAC in the surrounding inflammatory microenvironment of chronic pancreatitis. In cases of growing lipomas, careful radiologic surveillance may be needed not only for the possibility of liposarcoma but also for the coincidental occurrence of PDAC. Introduction Although lipomas are one of the most common benign mesenchymal tumors, pancreatic lipomas (PLs), arising from adipose tissue in the pancreatic parenchyma, are extremely rare among pancreatic tumors [1]. Previous studies have reported that 0.012-0.08% of consecutive patients undergoing computed tomography (CT) or magnetic resonance imaging (MRI) were incidentally diagnosed with PL; however, the accurate incidence of PL remains unclear [2,3]. Lipomas must be strictly distinguished from liposarcomas, which are malignant tumors derived from adipose tissues. However, this is sometimes difficult because of radiographic similarities, especially with well-differentiated liposarcomas [4]. Surgical resection of lipomas is considered in cases of suspected liposarcomas with invasion of other organs or the presence of solid components [5]. 
On the other hand, cases of overlapping tumors between PLs and other pancreatic tumors are extremely rare; only one case of PL with benign intraductal papillary mucinous neoplasm has been reported to date [6]. Furthermore, cases of coexisting pancreatic ductal adenocarcinoma (PDAC) and PL have not been reported. Herein, we report a case of PDAC arising from the pancreatic parenchyma with chronic pancreatitis compressed by a large PL.

Case presentation
The patient was a 69-year-old male. He was referred to our department because of the detection of a pancreatic mass on screening contrast-enhanced CT. Initial contrast-enhanced CT revealed a well-defined 6 cm mass composed of homogeneous adipose tissue without any solid nodules in the pancreatic head, which led to the diagnosis of PL (Fig. 1A, B). Given these imaging findings and the absence of symptoms, he was followed up with MRI once a year. Because the tumor had slowly grown to 12 cm over 11 years, we considered the possibility that it was a well-differentiated liposarcoma and continued to follow the patient closely. During follow-up, serum analysis at his local clinic showed liver damage: aspartate aminotransferase 257 U/L, alanine aminotransferase 295 U/L, alkaline phosphatase 322 U/L, gamma-glutamyl transpeptidase 1255 U/L, and total bilirubin 1.9 mg/dL; inflammatory markers were only slightly elevated (white blood cell count 4700, C-reactive protein 0.3 mg/dL), and he was referred to our hospital. Serum tumor markers, including carcinoembryonic antigen, carbohydrate antigen 19-9, and DUPAN-2, were elevated (6.2 ng/ml, 81 U/ml, and 374 U/ml, respectively). Contrast-enhanced CT showed the previously diagnosed 12 cm tumor in the pancreatic head (Fig. 1C, D) and an irregular isodensity mass at the upper margin of the tumor (Fig. 1E) that invaded and obstructed the distal common bile duct. MRI showed findings similar to those of contrast-enhanced CT (Fig. 2A, B), and diffusion-weighted imaging (DWI) showed high signal intensity in the mass at the upper margin of the tumor (Fig. 2C). Magnetic resonance cholangiopancreatography (MRCP) revealed no stenosis or other specific findings in the main pancreatic duct (Fig. 2D). Based on these imaging findings, the patient underwent endoscopic retrograde biliary drainage and bile duct brushing cytology, which revealed indeterminate findings. The differential diagnosis of the tumor at that time was as follows: (1) pancreatic liposarcoma (focal change from well-differentiated to dedifferentiated, not a lipoma), (2) distal cholangiocarcinoma, and (3) pancreatic cancer. After the cholangitis improved, a pancreatoduodenectomy was performed. Intraoperative findings revealed a soft adipose tumor, approximately 12 cm in size, in the pancreatic head. The pancreatic parenchyma of the pancreatic body and tail was normal and soft, without signs of chronic pancreatitis or other abnormal findings. Histological examination of the surgical specimen showed a 12 cm adipose tumor in the pancreatic head (Fig. 3A). Figure 3B shows a whitish tumor located around the adipose lesion that invaded the common bile duct. The largest tumor measured 35 mm in diameter. Histologically, hematoxylin and eosin staining revealed irregularly distributed ducts with coarsely granular chromatin and enlarged nuclei in the pancreatic parenchyma compressed by proliferating adipose tissue, leading to a diagnosis of moderately differentiated PDAC (Fig. 4A, B).
Microscopically, the tumor showed lymphatic, venous, perineural, bile duct, and retroperitoneal invasion. The adipose lesion showed homogeneous adipose tissue with no evidence of sarcoma, which led to a diagnosis of lipoma. Additionally, extensive fibrosis of the pancreatic parenchyma and atrophy of the acinar cells around the lipoma were observed, suggesting chronic pancreatitis (Fig. 4C). The resection margins were free of tumor cells, and there were no metastases to the regional lymph nodes. Taken together, the pathological diagnosis was PDAC (pT2N0M0, pStage Ib) with chronic pancreatitis and PL. The postoperative course was uneventful, and the patient was discharged on the 15th day after surgery. The patient underwent adjuvant chemotherapy for PDAC and has remained recurrence-free for more than 6 months.

Discussion
PLs are generally asymptomatic, rare, and benign pancreatic tumors that are commonly diagnosed incidentally on radiological images [7]. Although surgical treatment is unnecessary in most cases, resection should be considered in cases with severe symptoms or suspected malignancy [8,9]. In this case, during long-term follow-up for the slowly growing adipose tumor, the patient developed common bile duct obstruction and a contrast-enhanced mass appeared at the margin of the tumor. Initially, we suspected a focal dedifferentiated change from a well-differentiated liposarcoma and performed surgery. As a result, the pathological diagnosis was PDAC coexisting with PL; to the best of our knowledge, this is the first report of a patient with this simultaneous diagnosis. The causes of this misdiagnosis are as follows: first, MRI and CT showed no abnormal findings in the main pancreatic duct; second, the patient presented with an iso-density, linear, scar-like lesion rather than the low-density solid mass typical of PDAC. Therefore, preoperative diagnosis may be difficult. PDAC is one of the most aggressive and lethal cancers and the third leading cause of cancer mortality in the United States [10]. Despite advances in diagnostic and therapeutic techniques for several cancers, it often presents at an advanced stage, contributing to poor 5-year survival rates of 2-9% [11][12][13]. Therefore, a multidisciplinary management approach is recommended for the treatment of PDAC [14]. In recent years, the efficacy of neoadjuvant chemotherapy in improving the prognosis of patients with resectable PDAC has been reported [15][16][17]. In our case, neoadjuvant therapy might have been administered if the tumor had been preoperatively diagnosed as PDAC. However, it was difficult to distinguish lipoma, liposarcoma, PDAC, and cholangiocarcinoma on the preoperative imaging examinations in this case. Under unusual conditions, such as the presence of a large PL, physicians should avoid stereotypes regarding imaging findings in order to reach an accurate diagnosis. In addition, we did not perform endoscopic ultrasound-guided fine-needle aspiration biopsy, because needle biopsy for sarcomas is not an established approach and sometimes leads to needle tract seeding [18,19]. Thus, we first performed surgery for diagnosis and treatment in the presented case. Interestingly, the histopathological findings revealed chronic pancreatitis, including fibrotic and atrophic changes surrounding the PL. Previous studies have reported that chronic pancreatitis is a major risk factor for PDAC [20,21]. The underlying mechanism of PDAC arising from chronic pancreatitis might be the local influence of the inflammatory process [22].
On the other hand, the relationship between chronic pancreatitis and PL has not been clarified. However, Jairo et al. reported that PLs compress the bile and pancreatic ducts and behave aggressively despite being benign [23]. In our case, the slow-growing large PL may have been related to chronic pancreatitis, possibly due to compression of the pancreatic parenchyma, such as obstruction of branching pancreatic ducts, or local inflammatory processes caused by adipose tissue-derived inflammatory cytokines, including adipokines. Consequently, the presence of a large lipoma may have contributed to the development of PDAC via chronic pancreatitis. However, it is difficult to test this hypothesis. Taken together, although lipomas are histologically benign, careful radiological surveillance is required for large or growing PLs. Conclusion Herein, we describe an extremely rare case of PDAC arising from the pancreatic parenchyma with chronic pancreatitis compressed by a large PL. PL may be related to the development of PDAC through the surrounding inflammatory microenvironment of chronic pancreatitis. In cases of growing lipomas, careful radiologic surveillance may be needed not only for the possibility of liposarcoma but also for the coincidental occurrence of PDAC.
2023-08-01T13:46:32.325Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "ae1f565b333c621d44be576bd89f77d431ebd3af", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "ae1f565b333c621d44be576bd89f77d431ebd3af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1780548
pes2o/s2orc
v3-fos-license
Journal of Nepal Medical Association 2003; 42: 1-5

POSSIBLE OCCUPATIONAL LUNG CANCER IN NEPAL

The objective of this study was to describe the relationship between occupational exposures and the development of lung cancer among the patients attending Bhaktapur Cancer Care Center, Bhaktapur, Nepal. The study subjects consisted of 85 cases of lung cancer and a reference group of 40 cases of colon cancer. Demographic data and information about work history were obtained by a structured interview. Based on the occupational history, subjects were divided into exposed and non-exposed groups concerning carcinogenic agents. Exposure-prone occupations like agriculture, construction of buildings, construction of roads and bridges, manufacturing, and transport were categorised as exposed occupations. Similarly, occupations like administrative services, business, student and housewife were categorised as non-exposed. Odds ratios (OR) and 95% confidence intervals (CI) were calculated using logistic regression. Adjustments for smoking habit, lifelong cigarette consumption (smoking pack years), alcohol habit, education level and age were done. The crude OR for the exposed workers was 5.59 (95% CI: 2.47, 12.6). After adjustment for smoking habit alone, or for smoking habit, smoking pack years, alcohol habit, education level and age, the OR was 4.8 (95% CI: 2.02, 11.4) and 4.2 (95% CI: 1.4, 12.0), respectively.

INTRODUCTION
Cancer is a worldwide public health problem and it accounts for an increasing proportion of all deaths. Malignant diseases are the second most frequent cause of death in developed countries (21% of all deaths) after cardiovascular diseases. In developing countries, they account for 7% of all deaths. Of the 7.6 million new cases of cancer occurring each year in the world, 4 million occur in the developing countries. The overall incidence is slightly higher in males than in females [1]. The most frequent cancer type among males in the developing countries is lung cancer, accounting for about 430,000 new cases in 2000, followed by stomach cancer (350,000 new cases in 2000). Lung cancer was the most frequent cancer in the world for males in 2000, with a total estimated number of about 900,000 new cases per year, 47.7% of which occurred in the developing countries. Lung cancer is 2.7 times less frequent among women.

The estimated fraction of occupational cancer ranges between 1% and 20% [2]. The exact proportion cannot be determined because of limitations in our current knowledge about the magnitude, duration and distribution in the population of the exposures to specific carcinogens. Very few data on exposure to carcinogens are available in developing countries. A large proportion of known carcinogens occurs in occupational settings. Doll and Peto stated that a total of 2-8% of cancers are attributable to occupation. An introductory report of the International Labour Office from 1999 stated that around 8% of cancer deaths are attributed to occupation [3]. The occupational environment provides an ideal opportunity for introducing cancer prevention by eliminating or decreasing the exposure. Many studies in occupational cancer epidemiology show a decrease in the risk of cancer following prevention [4]. Epidemiological studies have identified a wide range of occupational carcinogens, which have been related to an increase in lung cancer risk. Doll and Peto estimated that 15% of lung cancer in men and 5% in women in the USA could be attributed to occupational exposures [5].
Vineis and Simonato, in a literature review, cited attributable risk estimates for occupation and lung cancer that ranged from 4% to 40% [6]. Kvåle et al. suggested that 13 to 27% of the lung cancer patients could be attributed to occupational exposures [7].

The present study was carried out to analyse the association between lung cancer and occupational exposures in Nepal, where lung cancer is the main cause of mortality from cancer in men [8]. No research on cancer has been performed in Nepal. The whole effort of the health personnel is put into treatment of the cancer patients and into awareness-raising programs, such as consciousness about early signs and the disadvantages of smoking and alcohol. The objective of the study was to describe the relationship between occupational exposures and the development of lung cancer among patients attending a cancer care hospital in Nepal.

METHODS
A case-control study was designed. The study subjects consisted of all cases of lung cancer and a reference group of all cases of colon cancer that attended Bhaktapur Cancer Hospital from 1st July to 16th October 2001. All subjects were asked individually to participate in the study and agreed verbally to this. Confidentiality of all information about the subjects was assured.

Resources were not available to construct a population-based control group. Instead, the decision to include colon cancer patients as a reference group was made on the basis that very few occupational carcinogens are known to cause colon cancer.

The patients were interviewed in the presence of their close relatives. It was a structured interview, in a questionnaire format. Information about their education, father's occupation, family history of cancer, present and past medical history, diet pattern, smoking habit, history of alcohol intake, present and past heating and cooking system at home, and intake of any carcinogenic drug was obtained. A detailed past and present occupational history was recorded for all subjects. The questionnaire about occupation included information like the location of different work places, the duration worked in those occupations, the types of industries, and the job duties.

Based on the occupational history, subjects were divided into exposed and non-exposed groups concerning carcinogenic agents. Exposure-prone occupations like agriculture, construction of buildings, construction of roads and bridges, manufacturing, and transport were categorised as exposed occupations. Similarly, occupations like administrative services, business, student and housewife were categorised as non-exposed.

A detailed smoking history was obtained for all subjects who had ever smoked regularly for more than six months. This information comprised age at start of smoking, duration of smoking, amount of tobacco smoked, and age at any major change in smoking habits. Subjects who had been smoking for at least six months at the time of the study were considered smokers, whereas subjects who had never smoked for at least six months were considered non-smokers. Smoking pack years measure the cumulative consumption of cigarettes throughout the whole life. One smoking pack year denotes the smoking of one pack of cigarettes per day for one year.
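The pack-year bookkeeping used above reduces to a simple product; a trivial helper of our own, for concreteness:

```python
def pack_years(packs_per_day, years_smoked):
    # cumulative lifetime consumption: 1 pack/day for 1 year = 1 pack-year
    return packs_per_day * years_smoked

print(pack_years(1.0, 19.0))  # 19.0, e.g. the cases' mean of 19 pack years
```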
Statistical analysis was done using the statistical program package "Statistical Package for the Social Sciences" (SPSS), eleventh version. Odds ratios (OR) and their 95% confidence intervals (95% CI) for lung cancer were estimated for categorical variables using logistic regression models. For continuous variables, p-values with 95% confidence intervals were calculated using independent-samples t-tests. Adjustments for smoking habit, smoking pack years, history of alcohol, education level and age were done by the use of multivariate logistic regression analyses.

RESULTS
There were 85 cases of lung cancer and 40 cases of colon cancer. Among the cases of lung cancer, 68.2% were males and 31.8% were females. There were 67.5% males and 32.5% females among the colon cancer cases. Mean age for the lung cancer and colon cancer cases differed significantly between the groups, 59 and 42 years, respectively (Table I). A significantly higher number of the lung cancer cases were uneducated compared to the colon cancer cases (Table II). 42.4% of the lung cancer cases and 20% of the colon cancer cases had ever consumed alcohol in the past. The number of non-smokers among the lung cancer cases was low (15.3%) compared to the number of non-smokers in the control group (65%). Mean smoking pack years for the cases and controls were 19 and 3.1, respectively (Table I). Variables like sex, diet, father's history of occupation, family history of cancer, history of carcinogenic drugs, and heating and cooking habits were not significantly different between the cases and the controls.

Among the cases, 23 subjects had worked in non-exposed and 62 in exposed occupations, whereas among the controls 27 subjects had worked in non-exposed and 13 in exposed occupations.

Odds ratios and 95% confidence intervals for lung cancer are shown in Table III; the adjusted estimates were analysed by multivariate regression analysis. Variables that differed between the lung cancer and colon cancer cases, i.e., smoking habit, alcohol habit, education level, age and smoking pack years, were adjusted for. There was a significant excess risk of lung cancer for the exposed workers. The crude OR for the exposed workers was 5.59 (95% CI: 2.47, 12.6). After adjustment for smoking habit alone, and for smoking habit, alcohol habit, smoking pack years, education and age altogether, the OR was 4.8 (95% CI: 2.02, 11.4) and 4.2 (95% CI: 1.4, 12), respectively.
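As a quick arithmetic check (our own illustration, not part of the paper), the crude OR and its confidence interval can be recomputed from the reported 2×2 counts; assuming a Woolf-type interval on the log odds ratio reproduces the published figures:

```python
import math

# exposure table from the study: columns = lung-cancer cases / colon controls
a, b = 62, 13   # exposed: cases, controls
c, d = 23, 27   # non-exposed: cases, controls

or_crude = (a * d) / (b * c)                        # 5.59
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)           # Woolf's method (assumed)
lo = math.exp(math.log(or_crude) - 1.96 * se_log)   # ~2.47
hi = math.exp(math.log(or_crude) + 1.96 * se_log)   # ~12.7
print(f"OR = {or_crude:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```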
DISCUSSION
We found a high odds ratio for lung cancer among the exposed workers even after adjustment for potential confounders like smoking habit, smoking pack years, age and education level. This was found despite the low number of cases in our study. Our 95% confidence interval was wide due to the small sample size.

The study subjects consisted of 85 cases of lung cancer and a reference group of 40 cases of colon cancer. The decision to include colon cancer patients as a reference group was made on the basis that occupation in general is likely to play a small role in colon cancer aetiology, with perhaps its major contribution an indirect one via physical activity [9,12,13,14].

A potential limitation of this study is information bias. With regard to information bias, two sources must be considered: reliance on the type of industries for determining exposure status, and the use of occupational histories obtained by interviews, which were not further validated. In the context of this study, self-reported information about the type of industries was the only feasible approach for classifying exposure status. The study was hospital based and the reported industries were in diverse geographic locations. Due to time and logistics constraints, the subjects' tasks could not be validated from the respective industries. Grouping by different exposures was thus not possible. Non-differential misclassification can be anticipated from this approach. Such non-differential misclassification would bias the OR towards no difference [15].

Subjects were divided into exposed and non-exposed groups concerning carcinogenic agents. This was a rather rough categorisation due to the low number of cases, giving a group of very mixed exposure to carcinogens. The categorisation of housewives as non-exposed might be questioned, since they might be exposed to volatile organic compounds (VOC) and polycyclic aromatic hydrocarbons (PAH) every day [16,17,18,19,20].

No previous studies on occupational cancer have been done in Nepal. Several studies carried out in developed countries showed an association between occupational exposures and lung cancer. Comparison with such studies is of limited value because of potential differences in work environment exposures between Nepal and developed countries.

In several studies, the lung cancer risk in subjects working in agriculture, construction, driving, manufacture, and combinations of different occupations was found to be higher compared to administrative staff workers [22][23][24][25][26]. This was found also after adjustments for smoking. This is in concordance with the findings of the present study. An elevated risk of lung cancer has been observed among sugarcane farmers [21,25]. A study done in Gaza showed a highly significant positive correlation between the use of pesticides and lung cancer incidence [22]. Barthel E observed a high incidence of lung cancer in agriculture workers with chronic occupational exposure to pesticides [23]. Exposure to agents like herbicides (MCPA), insecticides (DDT, HCH and Toxaphen), organic phosphorus compounds (Parathion), organic nitro derivatives and fungicides has been related to these findings [23,24]. Furthermore, high risks for lung cancer have been observed for agriculture workers (OR = 1.8, 95% CI: 1.1, 3.1), drivers (OR = 1.9, 95% CI: 1.1, 4.0) and construction workers (OR = 2.5, 95% CI: 1.0, 5.9) in a hospital-based case-control study done by Pezzotto and Poletto in Argentina [27]. For the workers in the construction industry, the mortality odds ratio for lung cancer is high (135, 95% CI: 129, 140) [28]. A cohort study done by Rafnsson V et al. showed an increased risk of lung cancer among masons, which could be due to exposure to hexavalent chromium in the cement [29]. In a hospital-based case-control study, Matos E et al. observed elevated odds ratios for lung cancer among employees in the alcoholic beverages industry (4.5, 95% CI: 1.02-20.2), sawmills and wood mills (4.6, 95% CI: 1.1-18.4) and chemicals/plastics manufacturing (1.8, 95% CI: 1.04-3.2) [30].

Hospital-based case-control studies using only cancer controls, like the present study, were not found in the literature. Previous lung cancer studies are cohort studies, hospital-based case-control studies with non-cancer hospital controls and/or population controls, and population-based case-control studies.
Previous epidemiological studies done in other developing countries strongly indicate that there are many occupations and industries in Nepal with possible exposure to carcinogens. The government bodies do not have full information about the types of occupational carcinogens present in the industries in Nepal. It is very difficult to obtain detailed information concerning carcinogens in the industry in Nepal, but there seems to be a large number of workers who are exposed to different carcinogens in a large number of industries.

This study is one of the first studies on occupational cancer in Nepal. The study demonstrates the need for further research on occupational cancer in Nepal with larger populations, refining the different occupational groups.
2014-10-01T00:00:00.000Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "1140abb43fc3e87acca73ae6ee5556c2f9968b07", "oa_license": "CCBY", "oa_url": "https://www.jnma.com.np/jnma/index.php/jnma/article/download/659/1372", "oa_status": "HYBRID", "pdf_src": "CiteSeerX", "pdf_hash": "1140abb43fc3e87acca73ae6ee5556c2f9968b07", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9975000
pes2o/s2orc
v3-fos-license
Scoping the Impact of Changes in Population Age-Structure on the Future Burden of Foodborne Disease in The Netherlands, 2020-2060

A demographic shift towards a larger proportion of elderly in the Dutch population in the coming decades might change foodborne disease incidence and mortality. In the current study we focused on the age-specific changes in the occurrence of foodborne pathogens by combining age-specific demographic forecasts for 10-year periods between 2020 and 2060 with current age-specific infection probabilities for Campylobacter spp., non-typhoidal Salmonella, hepatitis A virus, acquired Toxoplasma gondii and Listeria monocytogenes. Disease incidence rates for the former three pathogens were estimated to change marginally, because increases and decreases in specific age groups cancelled out over all ages. The estimated incidence of reported cases per 100,000 for 2060 amounted to 12 (Salmonella), 51 (Campylobacter), 1.1 (hepatitis A virus) and 2.1 (Toxoplasma). For L. monocytogenes, incidence increased by 45% from 0.41 per 100,000 in 2011 to 0.60 per 100,000. Estimated mortality rates increased two-fold for Salmonella and Campylobacter to 0.5 and 0.7 per 100,000, and increased by 25% for Listeria from 0.06 to 0.08. This straightforward scoping effort does not suggest major changes in incidence and mortality for these foodborne pathogens based on changes in the population age-structure as an independent factor. Other factors, such as changes in health care systems, social clustering and food processing and preparation, could not be included in the estimates.

Introduction
Demographic forecasts show that, as in many industrialized countries, the population in The Netherlands is aging [1]. This demographic shift may have consequences for morbidity and mortality caused by infectious diseases. Aging of humans is associated with, among other things, changes to the innate and adaptive immune system, a process referred to as immunosenescence [2]. Although the number of immune cells does not decrease, functional alterations in some of the associated cell types occur [3]. These alterations likely cause changes in the defense mechanism against pathogens in the elderly, either due to an increased probability of infection and/or an increased probability of disease following infection. Another mechanism related to aging that could play a role in the increased susceptibility of the elderly to infectious diseases is an alteration in gastric hydrochloric acid secretion [4]. As gastric acid is the first in a series of defense mechanisms against gastrointestinal pathogens, pH disruption likely leads to an increased probability of pathogen survival and transfer to the intestines. This effect on disease incidence is indirectly shown by the increased risk for gastrointestinal disease due to proton-pump-inhibitor use [5]. Looking at epidemiological data, specific foodborne infectious diseases are indeed observed more frequently in the elderly than in other age groups, such as Listeria meningitis [6]. Changes in the proportion of elderly might therefore result, among other things, in an increase in listeriosis cases. The effects of the demographic shift may also occur in the very young. A yet underdeveloped immune system, lack of acquired immunity, or alternative behavior leading to different exposure to pathogens puts the very young at a higher risk for infection and/or disease. Pathogens such as Salmonella spp. and Campylobacter spp. indeed show the highest incidence rates among those aged 0-4 [7].
A decrease in the proportion of young individuals therefore might contribute substantially to a lower overall incidence for these bacteria. A demographic shift towards a larger proportion of elderly people in the population and a smaller proportion of the young might thus have counteracting effects on infectious disease incidence at the population level. The aim of the current study was therefore to assess the effect of the demographic shift regarding age as an independent factor on the incidence and excess mortality of foodborne diseases. The study focused on infectious diseases caused by five foodborne pathogens previously identified as causing either a high population and/or a high individual disease burden: Campylobacter spp., non-typhoidal Salmonella spp., Listeria monocytogenes, hepatitis A virus and acquired infections with Toxoplasma gondii. Perinatal T. gondii episodes were not considered.

Population Data
Data on the current and expected future age-structure of the population for 2011 and per decade from 2020 to 2060 were obtained from Statistics Netherlands [1], based on the expected fertility of women, immigration rate, emigration rate, and anticipated birth and death rates. Data were grouped according to age: 0 years, 1-4, 5-9, … (5-year classes), ≥90.

Incidence Estimation
The incidence and excess mortality estimates are based on reported cases and are not corrected for underreporting to population incidences. Future incidences of disease as a function of the age-structure of the population were estimated from age-specific incidence rates per pathogen, assuming these rates remain constant until 2060. For Campylobacter spp. and Salmonella spp. these probabilities were calculated from the age of cases reported to RIVM through laboratory surveillance in 2011 [7]. Hepatitis A is a notifiable disease in The Netherlands and cases are reported to the Municipal Health Service and subsequently entered in a national database. The ages of reported cases were obtained from this database for the years 2007 through 2011. Ages of cases of L. monocytogenes were obtained through active surveillance in The Netherlands [8] from 2005 through 2011. As case numbers for hepatitis A and acquired listeriosis are relatively small, data for the study years were pooled per pathogen to more robustly estimate the disease risk for reported cases per age class. For Toxoplasma gondii we used the seroconversion incidence approach of Havelaar et al. [9], who described the mean seroconversion rate to be 0.85% per year from approximately 20 years to 65 years of age, based on data from Hofhuis et al. [10] for 2006 and 2007. The probability of infection in the other age classes was set to "0". The age-specific incidences were multiplied by the number of individuals per age class per study year to obtain the expected number of cases for that age class. These were subsequently summed over all age classes for each study year and expressed as incidence per 100,000 population for comparison.
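The incidence bookkeeping just described is a dot product followed by a normalization; a minimal sketch (names and the toy numbers are ours, not from the paper):

```python
import numpy as np

def projected_incidence(rates, population):
    """Expected reported cases for one study year: age-specific incidence
    rates (held constant) times projected age-class populations, summed,
    then expressed per 100,000 population."""
    cases = float(np.dot(rates, population))
    per_100k = 1e5 * cases / population.sum()
    return cases, per_100k

rates = np.array([8.0, 3.0, 1.5, 4.0]) / 1e5        # toy 4-class example
pop_2040 = np.array([1.5e6, 3.0e6, 4.5e6, 4.0e6])   # hypothetical forecast
print(projected_incidence(rates, pop_2040))
```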
Excess Mortality Estimation
Excess mortality is defined in this study as the mortality that occurs additionally within 365 days after onset of the disease. Helms et al. [11] estimated the odds ratio for death due to laboratory-confirmed salmonellosis and campylobacteriosis in regular surveillance to be 2.85 and 1.86, respectively. The approximate (age-independent) excess probability of death was calculated as (OR − 1), amounting to 1.85 and 0.86, respectively. These numbers were multiplied by the average probability of death per 5-year age class according to Statistics Netherlands [1] and subsequently by the estimated number of reported cases per age class to obtain the estimated number of deaths due to foodborne infections [9]. Fatal listeriosis cases are reported to RIVM as part of the surveillance [8], and the age-specific risks of a fatal episode of listeriosis were obtained from these data. As the incidence of fatal acquired listeriosis is low, data from 2005-2011 were pooled to more robustly estimate the age-distribution of fatal cases. This age-distribution was subsequently applied to the average reported case numbers for 2005-2011 and divided by the population size per age class for 2011 to obtain the estimated mortality rate per class. Excess mortality for T. gondii was not included in the current analysis, analogous to the approach of Havelaar et al. [9]. Ages for fatal HAV cases were not available and could therefore not be considered.

Demographic Changes
The percentage of people ≥65 years increases from 16% in 2011 to 26% in 2040 and subsequently declines marginally to 25% in 2060 (Figure 1). The proportion of people aged 45 to 65 is expected to decrease more than that of those younger than 45. As the demographic shift is estimated to reach a new equilibrium around 2040, a particular emphasis in further descriptions of the results will be on 2040.

Incidence Estimation
The age-specific probabilities of infection per year and pathogen are listed in Table 1. Relating these probabilities to the number of individuals per age class showed marginal overall changes in incidence for Campylobacter spp., Salmonella spp. and HAV (Figure 2). Larger changes per age class were observed, but these cancelled out at the population level. For Salmonella spp. and Campylobacter spp., for instance, the proportion of cases among those aged ≥65 increased by ~10% in 2040 but concurrently decreased by the same amount among the younger age classes. The largest decrease in the proportion was among those aged 0 to 20 for Salmonella spp. and those aged 45 to 65 for Campylobacter spp. (both −5%). The proportion of HAV cases among those aged ≥65 increased by 3%, and was associated with a decrease among the younger ages, especially those aged 45 to 65 (−2.2%). The estimated incidence of listeriosis increased by 45% from 0.41 per 100,000 in 2011 to 0.60 per 100,000 in 2040. The proportion of cases increased among those aged ≥65 by 15%, and this was countered predominantly by a decrease among those aged 45 to 65 (−10%). The estimated incidence of toxoplasmosis increased among those aged 20 to 29 and 60 to 65 by 7% and 20%, respectively, but decreased among those aged 30 to 65, leading to the overall decrease of 10% by 2040.

These results indicate that the change in age-structure of the population, as an independent factor, does not lead to large changes in the overall incidence. The shifts in age-structure were predicted to cause an increase in some (older) age groups and a decrease in other age groups. Over all ages, these differences cancel each other out. This approach was, however, intended as a scoping effort, with focus on a change in a single factor (i.e., the age-structure of the population). Aspects that were not considered in this analysis are, for instance, a change in the age-specific probabilities of disease. A change in health care availability, accessibility and performance might change these probabilities considerably.
Prescribing proton-pump inhibitors, as is done particularly for the elderly, is likely related to greater susceptibility to gastrointestinal disease [5,12]. Changes in the use of such medication in future years can change the age-specific probability of infection compared to the current baseline for 2011. Another aspect that is not considered in the current study is a dynamic change in immunity. Infection risks for HAV, for instance, have declined steadily in the past decades due to increased vaccine-induced protection and improved socio-economic and sanitation conditions [13,14]. McDonald et al. [15] predicted that the HAV seroprevalence declines in The Netherlands by 2030, leaving a larger proportion of the population susceptible to infection. In that study, scenario-based predictions of the disease burden for HAV showed a three-fold increase in 2030 when assuming a constant infection probability, as was done in the current study, whereas a further decreasing infection risk at the rate observed in past decades was estimated to decrease the HAV disease burden five-fold [15]. The difference between the results of that first scenario and our results, both considering a constant incidence, might be caused by the different measure estimated. Where our estimates considered incidence, McDonald et al. [15] additionally considered the severity of infections by estimating disability-adjusted life years. An estimated increase in incidence among the elderly, combined with an increased risk for severe complications due to hepatitis A in the elderly compared to younger age classes, caused the DALY estimate to increase. As for HAV, a decreasing trend in the prevalence of Toxoplasma gondii IgG is observed in The Netherlands, from 46% in 1987 to 35% in 1995/1996 and 19% in 2007/2008 among women of reproductive age [10]. This decrease may lead to increased susceptibility to infection upon exposure in the coming decades. The exposure may, however, also decrease. The observed seroprevalence decrease in the past 20 years has been attributed to changes in farming systems and increased consumption of frozen meat by consumers [16]. We can therefore not assess whether our assumption of a constant infection probability leads to an over- or an underestimation of the future disease risks due to Toxoplasma gondii. Also, the implementation of effective intervention measures to reduce exposure of the population to a pathogen may decrease the age-specific probabilities. Depending on the targeted population, these differences can affect particular age groups or the entire population. The current study thus assumes a simplistic situation in which all aspects of society remain unchanged except for the age-related structure of the population. The current incidence estimates therefore do not bear predictive value, but are useful in understanding the expected effect of population ageing as an independent factor on foodborne pathogens.

Excess Mortality Estimation
Excess mortality peaked in 2060; the rate approximately doubled from 0.21 to 0.46 per 100,000 population for Salmonella spp. and from 0.36 to 0.7 per 100,000 population for Campylobacter spp. (Figure 3). The excess mortality was observed among people aged ≥65 and remained approximately constant in younger age groups for both pathogens. The estimated excess mortality rate due to L. monocytogenes increased by 25% from 0.061 to 0.078 per 100,000 population. This increase was found among those aged 65 and over.
The rate among younger individuals was estimated to remain approximately unchanged. Similar to the future incidence estimation, the excess mortality estimates assume that current conditions remain unchanged until 2060. A change in health care availability, accessibility and performance might change the excess probabilities of death. Improved treatment of severe cases can increase the likelihood of recovery and hence lower the excess probability of death. Such future changes obviously cannot be included in the estimation, but they need to be considered when interpreting the results.

Conclusions

The average age of the Dutch population is predicted to increase until 2060, with the proportion of elderly aged ≥65 peaking in 2040. Estimated disease incidence rates for Campylobacter spp., Salmonella spp., hepatitis A virus and Toxoplasma gondii will nevertheless change only marginally until 2060, because increases and decreases in specific age groups cancel out over all ages. For L. monocytogenes, incidence estimates will increase from 0.4 per 100,000 in 2011 to 0.6 per 100,000 in 2040. Estimated mortality rates over all ages will increase two-fold for Salmonella and Campylobacter, and by 25% for Listeria. This straightforward scoping effort does not suggest major changes in the incidence of foodborne pathogens based on demographic changes in age-structure as an independent factor. Other factors, such as changing policies, changes in the social clustering of the elderly or changing healthcare systems, have not been considered in these estimates and may be more influential. However, prognoses for such factors are not currently available.
Communities and sustainability in medieval and early modern Aragon, 1200-1600

This paper examines the case of sheep raising in Aragon from the 13th to the 17th century to explore the political dynamics and social criteria that rural communities used to manage their common land, and their role in larger economic and political frameworks. In line with recent historiography about the commons, the research connects the strength of rural communities, the institutional arrangements governing access to natural resources, and environmental efficiency. The hypothesis is that the "social reproduction" of the community was the aim that defined the collective action of strong and horizontal communities. They preserved their natural resources and defended large swathes of common land from foreigners. However, when these communities acted in a more complex system of transhumance within the framework of poorly articulated kingdoms, they would tend to predate others' resources and keep others' commons open to their free access. The outcome was the existence of large, but very different, and contested, kinds of commons.

Introduction

This paper aims to join the present debate about common land in Europe. Lately, after Ostrom's turning point (Ostrom 1990), historiography on the topic seems to be moving away from Hardin's dated idea that common goods were doomed to be over-exploited and inefficient (Hardin 1968). The theoretical framework of the debate is set up by sociologists, political scientists, and economic and environmental historians. Most of them, working mainly on late modern and contemporary times and using the paradigm of methodological individualism, are concerned with the way groups of users suppress free-riding attitudes by monitoring behaviour and implementing institutions and rules for cooperation (de Moor 2002; van Riel and van Zanden 2004). Studies on the quality of public life, civic engagement, and collective action have developed an interest in the study of medieval guilds and communes to explain the early formation of "social capital" in north-west Europe in the long run (Putnam 1993). The south of Europe has not been seen as a contributor to the development of civic institutions. My contribution to the debate is to bring in a case study from Southern Europe where we will find strong local civic communities and confraternities, and to connect the nature of these communities with their environments using the analytical framework of theories of identity (Goffman 1959; Taylor 1989) and recognition (Honneth 1996; Izquierdo Martín and Sánchez León 2001 and 2010). I present two case studies from Aragon (northeast Spain) from the 13th to the 17th century with the aim of reflecting on the relationship between the constitutional structure of rural communities and the management of their resources and, subsequently, on their interaction with other villages and the formation of different kinds of commonland. The comparison brings together animal husbandry in two different ecosystems, upland and lowland, that traditionally have been studied separately because their rural institutions evolved along distinct lines.1
We will examine the case of several rural villages in the northern Pyrenean valleys and of an urban brotherhood, a confraternity, the House of the Sheepbreeders of Zaragoza, a town on the terraces of the Ebro river. Both regions were able to sustain specialized wool economies through a medium-scale transhumance based on communities that managed their pasture, water, territory and much of their infrastructure collectively. These two regions of Aragon became connected through a sophisticated system of transhumance. In those centuries, the rules, customs and powers within the larger framework of the kingdom were not clearly defined. The privileges of pasture that were held by particular valleys, villages, towns, and lay and religious lords produced constant conflicts with the local communities over the use of their land. They caused the organization of different types of commons and an increase in the pressure upon the natural resources.

The argument of this article is that, in two different economic, social and ecological niches, the main aim of rural communities was the "social reproduction" of the whole group. As a result, they preserved large swathes of common land, kept foreigners away from them, guaranteed equal political rights to all the members of the village in order to restrain the actions of those better-off, and watched over the use of the natural resources. Many of the issues raised in the article have been dealt with in numerous works covering different parts of Europe in the medieval, early modern and modern periods (Vassberg 1984; de Moor et al. 2002, 15-32; Demelas and Vivier 2003), but cases differ from country to country. I want to stress a political argument that connects, in an explicative way, the social nature of these communities with three elements: 1) the participation of members in political decision-making, 2) the sustainability of their environmental decision-making, and 3) the historical origin and nature of different swathes of commons.

The article is based on a wide range of sources. Several municipal and valley charters of the Pyrenees have been published by Gómez Valenzuela. Most medieval sources for the sheepbreeders of Zaragoza have been gathered in a volume of documents by Fernández Otal for his PhD thesis (1993). Equally, the article has benefitted greatly from the volume of charters compiled in the unpublished PhD thesis by Faci Lacasta (1985). In addition, I have worked with the Archivo de la Casa de Ganaderos de Zaragoza, mainly on the 16th- and 17th-century series of "Manifestaciones" and "Repartimientos", and on the "Acts of the Assemblies", a total of 300 documents, as I will clarify below. There are comprehensive works about transhumance in Aragon, but none is concerned with the relationship between this activity, the socio-political nature of communities and the management of their environment.
Commonland and the transhumance system in Aragon

There was no other region in Europe with the degree of conflict among villages over their pastoral space as Spain in medieval and modern times (Wickham 2007, 44). Unlike conflicts over land, which mainly involved specific households and families, disputes over pasture and water threatened the entire community and its municipal territory (Wickham 1985, 437-451). In the northern mountains of Iberia, villages and towns had power over their hinterlands. Cultivation was not always suitable in those regions, and the pastoral activities promoted and preserved large tracts of common land. Customarily, the inhabitants of those villages had the right to use the montes, that is, the wasteland of the municipal estate, in order to take their flocks there and collect firewood and other goods.

The various forms of property ownership and their relationship with livestock raising have been studied extensively for Castile, mainly as a consequence of the existence of the Mesta (Mangas Navas 1981; Vassberg 1984; Marín Barriguete 1987; Anes Álvarez and García Sanz 1994; Monsalvo Antón 2007). The County, later Kingdom, of Aragón from the 9th to the 11th century had important similarities in terms of villages' land-tenure pattern. The Kingdom comprised a collection of communities, mostly inhabited by shepherds, who were well adapted to the harsh ecology of the Pyrenean mountains. The Aragonese northern upland forms parallel valleys running north to south following the course of the rivers, with mountains (2000-3000 m) covered in snow for eight months of the year. The mountains continue to the south in what is called the Prepyrenees, a lower chain of mountains of 1500 m. This is a difficult environment for cultivation, which forced the inhabitants of the highlands to combine the sowing of small harvests at the bottom of the valleys with a pervasive animal husbandry of pigs, horses, mules, cows, sheep and goats.

In 1098 the town of Huesca and in 1118 the town of Zaragoza were seized from the Muslims, and the vast lowlands of the Ebro terraces were opened to the northern Christians. The settlement of the new region was slow despite the efforts of the King to preserve the Muslim population and to transfer the abandoned property immediately to Christians. The town of Zaragoza is located at the epicentre of the driest region of the river Ebro.2 This is flat, stony and infertile land with basic soils poor in organic matter and a strong concentration of salt, with large surfaces of gypsum and xerophytic vegetation (Frutos Mejías 1976, 12-36). For the opposite reason to the highlands, the poverty of the soil and the harsh climate made any uses of the land complementary to animal husbandry impossible, except in particular areas such as the river banks, where agricultural production was limited. In the course of the 13th and 14th centuries, confraternities of sheepbreeders (ligallos) were founded in most villages. These were religious, economic, political, cultural, social and convivial brotherhoods. These organizations, like those in the north, protected their territory and common land against foreigners.

These two regions are complementary in temperatures and rainfall and located only 200 km apart. By the 13th century, it can be said that there was in Aragon a "double system of horizontal transhumance" that kept the sheep flocks in ideal temperate seasons all year, benefiting from fresh pasture.3
The highlanders travelled in October from the Pyrenees to the southern plains, from Huesca to the Ebro river, to spend the winter ("inverse transhumance"); the lowlanders walked in May from the Ebro valley to the northern mountains to spend the summer ("direct transhumance"). The system was based on the royal privileges given by the monarchs to monasteries, lords, towns and villages to graze on the realengo or baldío, the royal land, most of the land of the Kingdom. It succeeded in maintaining dynamic economies of scale with a large number of sheep, around 250,000 heads in the Ebro and the same figure for the Pyrenees, and in supplying wool to the local market and the international markets of Catalonia, Southern France and Italy (Sesma Múñoz 1982). In social terms, it consolidated a wealthy group of middle owners (500-2000 sheep) and facilitated a remarkable social and geographical mobility. It also produced large swathes of common land, landscapes of pastures extended to the maximum: to the limits of the irrigated margins of agriculture in the Ebro valley, and to the lower and upper limits of the forests in the Pyrenees.

Transhumant owners equally defended large and open commons across the kingdom against the tendency of local villages to enclose the best pasture for the use of the draught animals of the community. However, unlike what was happening at the local level, this kind of common land created endless conflicts. Two systems overlapped and confronted each other in the Kingdom of Aragon: on the one hand, a large collection of villages and towns, almighty upon their territories, which they kept forbidden to foreigners; on the other hand, basically the same entities holding different privileges to move their animals around the Kingdom and claiming access to others' pasture. The relationships between local communities were regulated by local customs and institutions. The problem broke out on a large scale where norms, relations, uses and customs were not established and coherent.

In the Kingdom of Aragon, commons were everywhere, but they were dissimilar and were the outcome of different processes and different communities.

The highland of the Pyrenees

To understand the nature of the rural communities in Aragon, their livestock-raising systems and their relationship with the natural resources, we need to look at the villages and hamlets of the northern mountains during the Early Middle Ages. Those villages were scattered in the valleys and settled on what was loosely defined as royal land (realengo). They controlled a territory of their own and held meetings where the elders of the households made decisions about common matters.

The cartularies of the first monasteries settled in the region and the early charters of the 9th century suggest the contemporaries identified several kinds of landscapes. In the donation of the Count Galindo Aznar, in 867, to the monastery of San Pedro de Siresa of the estiva de Alarate, he used four terms to qualify the landscapes in the high valley of Aiguas Tuertas (at the top of the Ansó Valley): the forest, the mountains, the fields of the villages, and the estivas on the mountain passes (puertos). Despite the loose and brief mention, we know the estivas were scattered areas in the mountains used for grazing and stocking the animals during the summer (from the Latin aestivus, meaning summer).4
In other documents, two other common words appear: the pardinas and aborrales, assartings that opened at the lower part of the hills and on the southern faces, where pens and folds could accommodate the animals during the cold winter. They concentrated in the Pre-Pyrenees.5 Finally, there was the most crucial space for the rural villages, the boalar, an enclosure of the best pasture reserved for the use of the draught animals of the community.6 In this landscape the crops, located at the bottom of the valley, turned into the forest, and the forest into the pastureland at the top of the mountain, all peppered with pens, huts, paths and isolated clearances (artigas), either for the use of livestock or for enclosed fields. The pardinas and estivas were the two characteristic landscapes among which the flocks moved.

4 Item dono et donando affirmo prefato monasterio totam vallem que est de illa intrata de Aguatorta in iuso, silvas, montes, campos; Subach cum suis campis, Oza similiter cum suis campis; et estivam que vocatur Agnedera, et suos agorrals; et estivam que dicitur Aguar; et unum cubilare in Aguatorta, et alium in Garinza. (Ubieto Arteta 1960, D.4, 19).

5 Pardinas could be common land, but more frequently they were subject to private usufruct or property. The documents of the monastery of San Juan de la Peña offer several examples of donations of estivas and pardinas: Sancho el Mayor at the beginning of the 11th century bestowed the estiva de Lecherín: illam estivam que dicitur Liserin (Ubieto Arteta 1962-63, vol. I, D.56, 165-169); Ramiro I gave the pardina of Pastoriza (Ibid., D.94, 73-75); a private donor gave away the pardina of Buil with its livestock in 1055 (Ibid., D.118, 117-119). The pardina was a compound of pastures and conveniently placed folds, as in 1050: pardina qui est in monte... Et in omnes montes de Quarnas suas erbas et suos cubilares (Durán Gudiol 1965, D.17). Seldom associated with churches: pardina de Aquabiela cum ecclesia sua et montibus totum... ab aqua de fonte usque ad erba de monte (Lacarra 1982, D.22, 36). The municipal fuero of the village of Alquézar explains in 1069 that residents should render the tithe for what they produced on the passes and no other rent: et in nullo loco ubi laboraveritis de illos portos in iuso non detis nisi decimam ad Deum (Lacarra 1982, D.2, 10).

This evidence suggests that land tenure was in the hands of these communities, who managed their flocks following a system of "vertical transhumance" from the hamlet to the high passes that reign on both sides of the Pyrenees. It is important to notice that the circuit did not develop within the boundaries of the village, but on the valley territory. These villages belonged to a valley (val), an ecological and cultural unit, and they held meetings of the villages of the valley (Juntas de Valles).7 The system leaned on the traditional custom that all the residents of a village could take their animals to the commons of its coterminous villages. They expressed this right as the power to take their animals "from threshing floor to threshing floor, and from sun to sun", that is, over the part of the montes that they could reach walking during the day, with the obligation to leave by night. Later legal codes called this institution the alera foral (de area foralis; Fairén y Guillén 1951). It defined large tracts of the wasteland of coterminous municipal estates called ademprivios, land collectively used and a major support of the villages' economy. The alera foral was based on a principle of strict reciprocity amongst local communities and the defence of their commons.

At the end of the 11th century, and bearing in mind that the larger evidence produced by monasteries can misguide us, a change might have happened in the nature of transhumance. The early Benedictine monasteries such as San Andrés de Fanlo, San Pedro de Siresa, Montearagón and San Juan de la Peña became big collectors of pardinas in the Prepyrenees during this century. Monasteries were aiming to stock their animals on the warmer faces of the lower altitudes of the southern mountains. This would imply the existence of a kind of early horizontal transhumance of animals from the central Pyrenees to the external chains, precisely at the very moment that sheep became the most frequently mentioned animals in the documents.

We know nothing of the dynamics and management of the natural resources of these communities before the late Middle Ages, but the characteristics might not have changed a great deal from what emerged in the documents of the 15th century. By then, rural communities were highly territorialized and they had a high level of competence over their hinterland. The municipal government regulated: the date of the opening of their municipal land to foreigners (usually 3 or 5 May); the date for ascending to the puertos (usually 10 June); the gradual climbing of the lambs followed by their mothers; the timetable to open and close the boalar; the restoration of folds, pens and buildings to milk the sheep and prepare the cheese; the number of animals per house, usually allocated in lots of 400 sheep (malladas); the type of animals entering the puertos; the shepherds that would lead the flocks; the counting of the animals; the plots that would be farmed out to foreigners; the prices of the pastures; the closing date of the puertos and the territory (usually by St Michael's); the exploitation of the forest; the protocol for sick animals and epidemics; and their arrangements with other valleys either in Aragon or France.8 The system was designed to avoid the introduction of foreign flocks, and to ensure that all the dwellers, both rich and poor, had access to the common upland, if in different proportions.

In economic terms, the system succeeded in launching economies of scale based on a highly commercialized sheep husbandry with three consequences: it sustained large numbers of sheep, about 200,000 heads; it articulated medium circuits of transhumance from the Pyrenees to the plains of Huesca and as far as the Ebro valley (200 km); and large portions of the pasture were hired out to foreigners, bringing an unknown wealth to the mountains.
Ethnographical studies of 19th-century animal husbandry have qualified these practices as "highly efficient ecologically" (Pallaruelo Campo 1988). Households defended the right of the residents to the municipal resources and promoted large tracts of collective commons under strict regulations, showing that common land was not a territory open to predation by all (Netting 1981; Ostrom 1990; Iriarte Goñi and Lana Berasain 2007, 207-208). Territories were completely defined, appropriated, classified, and organized in the collective imaginary and daily routines of the communities. It was a diversified landscape in which other activities conflated, basically the timber and iron industries.

In social terms, the system favoured the existence of multiple households which based their income on the raising of medium flocks of 500 heads, which moved in circuits of 200 km, leaving their villages for around eight months. These were communities with an increasing degree of social mobility because they came into contact with larger towns and businesses. These communities resisted the pressure of the sheepbreeders arriving from Zaragoza, according to the first 14th-century records. Every summer, around 70,000-100,000 sheep moved up to the Pyrenean mountains, a real invasion that did not always observe the local customs, practices and enclosures. Sometimes the locals did not provide food or accommodation to the foreigners; at other times they did not recognize their privileges, arguing that they could not understand Latin, that they could not keep copies of the accusations made against them, or that the written letter was not recognized in the valley. Every summer, villages ignored the royal privileges and the settlements and agreements of the previous year, and they attacked, killed, robbed, smuggled the animals and murdered the shepherds (Canellas 1988, D.28, 35, 58, 63, 69, 90, 92, and 156).

Paradoxically, the main problem was that the lowlanders did not respect the land enclosed either for agriculture or for the exclusive grazing of the local animals. Interestingly enough, the people from Zaragoza wanted to dismantle the fenced land. They were in favour of large common land, but without regulations. In the long run, the sheepbreeders of Zaragoza had to come to terms with it, and they paid for their summer grazing individually to the municipalities whose mountains they hired. Despite the conflicts, the arrival of the people from Zaragoza implied an enormous input of wealth to these communities, an element of complexity in their social profile and a qualitative development in the administration and management of the territory of these villages.
The pillar of that social order and kind of sustainable management is expressed in the concept of vecino (resident, literally 'neighbour' in Spanish), as has been shown for other regions of Europe, starting with Castile. Despite the complex universe of overlapping jurisdictions and traditions, the village assembly of the vecinos (sometimes called boni homines as well), the heads of the households of most rural communities in Aragon, was ultimately responsible for the management of their municipal estates. Every household in a settlement, no matter if it was a multi-generational household, was one vecino with equal political rights. It is this political element that I want to stress. Access to the common land, to the wasteland or to the enclosed land did not happen by simply buying land, but by becoming a member of the community, a vecino. It was not an economic but a political route. The village controlled the pressure upon the natural resources by regulating who belonged to the group (Netting 1981, 60). Families' and individuals' economic and social position was defined by their integration within the community (Izquierdo Martín 2007, 66). Hence, the concept of vecino and vecindad was the central source of legitimacy to exist within the community. It turned out to be a mechanism of economic equilibrium, as the institutional set-up favoured tendencies towards socio-economic levelling, which is different from affirming that their members were equal in economic terms. It was also a mechanism of ecological equilibrium that empowered communities to keep a balance between population and resources (Netting 1981, 12-16; Rosenberg 1988, 18).

In this context, there is an exceptional document that illustrates the principles that informed the relationship of these communities of shepherds with their natural resources. A dispute settled by arbiters in 1632 defined the number of animals that the vecinos of Tramascastilla, Sandiniés and Escarrilla, the three main villages of sheep owners in the Tena valley, could take to their puertos. It argues that the number of heads brought to the puertos should be fixed and not changed in the future: "considering the size of the puertos and the grasses, the animals they can sustain, and since the territory is always the same, the sheep cannot outnumber the municipal estate and if some sheep owners increase their animals, others shrink, as we learn from experience".9

These arbiters and communities knew what "livestock carrying capacity" meant: the necessity to estimate the ideal number of animals per hectare of pasture in order to establish a sustainable system of grazing for a period of time. We are presented here with a theory of ecological conservation as a factor of community preservation. The concept of the vecino was at the core of the management of the environment, since all the residents should have access to the natural resources. As a consequence, the sentence established the same number of animals per family in each of the three villages. The argument that informed the decision of these communities was the experience that if some residents had lots of animals on the commons, others would have but few. The social argument underlines that, with all factors being equal, wealth accumulation is a zero-sum game when natural resources are taken into account. The sentence set up a limit of 800 heads per household in order to favour the poor as much as the better-off villagers.10
For Northern Pyrenean communities, their territory was part of their social identity, and they managed the main part of their municipal estate as commons. They could sustain a specialized economy and respect the reproductive capacity of the natural resources thanks to the political participation of the community in the regulation of the resources on an equal basis. This created a kind of "environmental criterion" whose objective was the "social reproduction" of the community and preventing the access of foreigners to them.11 The population was not static over this long period, but it was not an independent variable either. It depended on the social coherence and political strength of the community as such to define its future, which explains the slow fluctuation of demographic figures. For a long while, the criteria of these communities, the concept of vecino and the universe of rights associated with it, made strong socio-economic diversification difficult and mitigated the abuse of the natural resources, at the expense of keeping the economic profile of the communities low. The breaking down of the identity of communities produced a divergence of economic interests and eventually the collapse of the demography and wealth of the Pyrenean mountains after the 17th century. There is no doubt that sustainability correlates with specific political forms and social aims.

The lowland of the Ebro valley

The progress of the Christian conquest to the south brought to the new lands the pattern of settlement, land tenure, economic activities and social organization of the northern communities. In the Aragonese frontier region, sheepbreeders formed associations called ligallos. Their main aim was the defence of the associates in all the issues related to their supra-local livestock activities, mainly against attacks, bandits, and rustlers, as well as the return of stray sheep to their owners and the maintenance of the sheep tracks. However, in order to understand the nature and decisions of these communities, it is important to take into account their social dimension. The ligallos helped the widows, the orphans, and the sick and old members of the Brotherhood; they funded beds in hospitals and chapels in the local churches, lent money to their associates, allowed instalments for their debts, mediated in conflicts and shared out the cost of the legal defence and damages of members of the association (Faci 1985, vol. II, D.262, 272, 276; Fernández Otal 1993, 60-63). The bonds amongst the brothers were reinforced in social gatherings such as banquets or processions (Greif et al. 1994, 745-776; Greif 2010). On all these occasions, ostentation by the better-off members was regulated to create the illusion of economic homogeneity within the group. They had their own religious identity, with patrons, churches and chapels, and developed activities for the improvement of the town (Fernández Otal 2004, 65-67). The most powerful and privileged ligallo in Aragon was that of the royal borough of Zaragoza.
The town was given a generous municipal law by King Alfonso I in 1129 in order to facilitate the settlement of the Christian population. In 1138, Count-King Ramón Berenguer IV defined a large municipal territory of 140,000 hectares (Canellas 1988, D.1, 47-49; D.7, 55). The town was located at the crossing of four rivers, the Ebro, Jalón, Huerva and Gállego, with an impressive irrigated huerta (vegetable gardens). The orchards were surrounded by large areas of sterile wasteland and by four calcareous plateaux with an altitude of 500-800 m, which could only be used for the roaming of sheep and goats. In the first written mention of the shepherds of Zaragoza, in 1218, they were granted the right to exert criminal jurisdiction (Canellas 1988, D.4, 52-53). In 1229, King James I took under his protection the Confraternity of Saint Simon and Saint Judas, later known as the House of the Sheepbreeders of Zaragoza (La Casa de Ganaderos de Zaragoza) (Canellas 1988, D.5, 53-54).

Since then, a series of royal charters made Zaragoza the most privileged institution in the Kingdom in terms of access to pastures. In 1233, the king forbade all the communities of the river Ebro to enclose boalares that could obstruct the free roaming of the sheep of Zaragoza (Canellas 1988, D.6, 54-55). That meant condemning these communities to starve or to fight back. It also meant the constitution of large commons without compensation for the coterminous communities. In 1235, the inhabitants of the city received a royal privilege of universal right of pasture (pastura universal) and, in 1391, the right to exert civil jurisdiction (Canellas 1988, D.6 and D.125, 55, 328-333).

From the 14th to the 16th centuries, Zaragoza had conflicts over pasture, water and the ademprivios not only with the populations within its jurisdictional term and with all the lordly villages on its boundaries, but also with the main summer grazing mountains: the Pyrenees, the Teruel mountains in the south, and the Moncayo mountains in the east. Communities in these three areas resisted, with differing success, the pressure of Zaragoza, despite royal charters that tried to curb the spirit of the highlanders.12

The documentation of the House of the Sheepbreeders is rich in references to conflicts. Fernández Otal has worked on the documents of the last two decades of the 15th century, showing that conflicts had a seasonal pattern, with winter being the critical moment of disputes against the local communities of the Ebro river and summer against the villages of the mountains.13 Around 1459, the House of the Sheepbreeders was given a substantial concession by the Council: half of the municipal territory south of the river Ebro, 60,000 out of the 140,000 hectares of the municipal jurisdiction (40% of the total land), for a low sum. The Dehesa de la Casa de Ganaderos de Zaragoza would be an endless problem in the relationship between the two institutions, as they never agreed on the nature of the transaction, nor on the price to be paid. The Council claimed that it had leased out what was municipally owned property (bienes de propios) and that the House should pay. The associates claimed they had leased part of the common land and, as all the residents of Zaragoza could be members of the Confraternity, they had free access to it. This identification of the status of vecino (resident) and cofrade (associate) brought problems in an urban and diversified economy where the interests of the town were not always in line with those of the shepherds.
At the start of the 16th century, documents throw light on the internal working of the House, its relationship with the territory and the management of the Dehesa de Ganaderos. There are four types of documents: the regulations (ordinaciones) of the House; the Acts of the Assemblies or General Chapters (four annual regular meetings plus some extraordinary ones; Actos Comunes); the annual declarations of sheep per owner (Manifestaciones por el pago de la Dehesa); and the annual allocation of sheep per field (Repartimientos). We learn that this was an association of urban medium sheep owners (wealthy peasants, artisans, municipal officials, merchants) of 1000-2000 heads, which employed local shepherds or shepherds from the Pyrenees.14 There were some prominent families within the organization, from the surrounding villages and from the oligarchy of Zaragoza, but they did not last beyond the third generation, shaping an internal language and discourse in their meetings that stressed that the common welfare and the corporation were based on the interests of small and medium sheepbreeders (Sánchez León 2007, 341). This confraternity was not an instrument of the town oligarchy or nobility. Lords did not enjoy special privileges and, like the rest of the sheep owners, entered only if they lived in a house in town and if the General Chapter approved them. The House developed many homeostatic mechanisms to safeguard the rights of all the members. There was no minimum number of animals required to enter the House, nor to occupy any of the offices except the highest (Justicia de Ganaderos). Elections were secret in the general assemblies, and each person had one vote. Grievances and disputes were to be solved within the community; no member could bid for others' pastures; plots for grazing on the Dehesa were allocated annually by lottery to prevent corruption; and the pasture could not be sold. The political control of the price of the grazing precluded the creation of a market in grasses, restrained the action of the wealthier sheepbreeders, and prevented the small owners from speculating with their lot. It was a main mechanism for the social reproduction of the community.

As in the Pyrenees, the Acts of the Assemblies of the 16th century show that the House thoroughly regulated the rhythm of activities and the number of animals in the Dehesa. There was an annual cycle starting in September, when the sheepbreeders declared the number of pregnant sheep in order to be allocated pasture in the Dehesa. The House estimated the total payment due to the Council for the Dehesa and organized the counting of the flocks and the allocation of the 42 acampos (each field was shared by two or more sheep owners). By 30 November, they opened the Dehesa; by 10-15 March, they closed it. From that date until 1 June, sheep flocks could only cross the Dehesa to start their transhumance, while strays were returned to their owners. Only the animals culled for the market by St John's day remained on the grounds. During the summer, the officials inspected water holes, marking posts, lambing sheds and paths (Fernández Otal 1993b; Pascua Echegaray 2007). The Dehesa of Zaragoza was the response to the need to secure pasture for a growing herd that stayed from October to April in the Ebro valley. This became a specialized landscape that excluded any alternative use and imposed a regime of intensive but seasonal grazing upon a fragile environment.
It is difficult to find out the criteria for the allocation of the Dehesa resources, but some of the practices and decisions point to a concern for the proper reproduction of the pasture. As we have already seen, the House banned the entrance of foreigners into the organization and obliged the owners to share a field. Sharing and the right of way prevented the family appropriation of the plots, and created co-responsibility in the exploitation and a form of mutual surveillance. It seems that animals were not regarded as more important than the pastures, because the Brotherhood never changed the date of opening the Dehesa or the cycle of exploitation of the land, not even in 1546, when, due to the sickness of the animals, the sheepbreeders asked for permission to enter it on 1 March. The House defined a special apportionment of land to keep the ill cattle in quarantine, but did not consent to change the schedule.15 They were concerned with the efficient exploitation of the grazing grounds, as when in 1534 the House obliged the animals to be taken to the Dehesa before the first of January so as not to waste the good grass. The general meetings of the House always voted in favour of allocating the pasture of flocks by fields, rather than entering the Dehesa freely, as the best way to control the exploitation of the grass.16 As a consequence, their policy was always to renew the lease of the Dehesa from the Council of Zaragoza17 and to award the fields by lottery.18 There are also hints that there was some kind of control over the number of members and animals in the organization.19

The House used an accurate system to calculate the ratio of heads to land in its Dehesa. The Repartimientos are documents that, from the 16th century on, specified owners by name and surname, the animals they could bring to the field allocated by lottery on the Dehesa, and the sum they had paid for them. Unlike in the north or in central Castile, the Dehesa was not allotted to the vecinos free of charge, as common land customarily was (Vassberg 1984, 52-53). They all paid. In 1535, the House established that the partition should be done in lots of 1000 sheep. This means that fields were of a similar size, that small owners had to bring their flocks together to meet the number, and that large flocks could not monopolize the pastures. If on the total 60,000 hectares of the Dehesa there were 42 fields, each field was about 1400 hectares. Considering the Mediterranean semi-arid ecology of the region, ideally the Dehesa should not stock more than 60,000 heads, that is, a carrying capacity of 0.7-1 sheep/ha (Vera y Vega 1986, 177-199). However, the mean number of animals allocated to the Dehesa was kept at 70,000 heads, which is a pressure on the pastures of 1.19 sheep/ha. Considering that sheep in those centuries were smaller in size and ate less, the number is appropriate, and indeed better than the Right of Possession as defined in Castile (1.33 sheep/ha in the summer plains of southern Spain). The pressure on the Dehesa seems unsustainable in two periods of the 17th century, 1610-1640 and 1660-1680, unless we bear in mind the increase in rainfall and drop in temperature due to the Little Ice Age that affected the northeast of Spain in that century (Saz Sánchez 2003, 39-64 and 111-136). For the rest, it fluctuated just above the ideal numbers. The peak moment of the allocation of animals to the Dehesa correlates with the higher number of animals manifested, showing that they prioritized the community over the natural resources.20 However, transhumance probably came in here as a major compensation for the large number of animals on the grasses of the municipal terms. The four-month absence resulted in a lower long-term livestock pressure in total, allowing the grazing grounds a minimum recovery.

In Zaragoza, as in the Pyrenees, the survival of the entire group of shepherds was at the centre of the equation, dictating that water and pasture were to be managed fairly for them all. We need to look again at the first half of the 17th century, as in the Pyrenees, at a specific and representative conflict around the Dehesa that discloses some features of their criteria. In this century, they opened the harsh debate around the sale or leasing of the annual right of pasture.

The vast fluctuations of sheep numbers in the first half of the 17th century, with the growth of 1610 and 1630-40 and the sheer drop of 1641 and 1650, opened a probably old and long debate about the selling of the grazing lots that each sheepbreeder received in the lottery of the acampos. The struggle started with the economic boom of 1626, as is mentioned in the minutes of the General Chapter of 29 June 1630, when allegedly some of the members manoeuvred to force the statutes to be reviewed in order to allow all kinds of abuses, mainly the selling of the grasses allocated by lot to owners who did not bring animals to graze.21

15 la defessa se solia soltar a quinze de Março cada año y que habia mucho ganado enfermo de piqueta que les parecia si se estaria dicha defesa por soltar por todo el mes de Março o si se soltaria como era costumbre... y que para los ganados enfermos se nombrase y diputasse una partida por el dicho señor justicia donde fuesen a paxentar y bever porque no peguassen el mal a los ganados sanos. (Faci 1985, II, D.250, 486-488).

16 On 23 May 1526, the General Assembly decided that the animals of the brothers would graze on the Dehesa, both Garrapinillos and Alcantarillas, by fields and depending on the size of their flocks (paciesse por acampadero dando a cada un confrayre el acampo según la porcion del ganado que tiene; Faci 1985, D.240, 443-450). In 1549, they unanimously voted to divide in fields (Faci 1985, II, D.269, 571-573).

17 si se recibiria la deffesa con la capitulacion que la arriendan los jurados que viessen y votassen sobre ello y assi por la mayor parte del dicho capitol fue votado y determinado que se tomasse dicha deffesa (Faci 1985, II, D.270, 574-577).

18 partir por suertes (Faci 1985, II, D.278, 607-608).

19 Members were growing constantly from the 13th to the 16th century (from 20 to 40 families in the 14th century, 40-80 in the 15th), but they fluctuated within a range afterwards (100-150) (Fernández Otal 1993a, 260). The number of animals fluctuated following a similar pattern: in the first half of the 16th century, between 70,000 and 100,000; from 1570 to 1606 the figure surpassed 100,000 heads with two deep drops; in the 17th century, 130,000 heads with a summit of 200,000 by 1635-40 (based on the Manifestaciones, Pascua Echegaray forthcoming).
In a tempestuous and well-attended General Chapter, the House disclosed three types of abuses: those who changed their fields, leaving the worst plots to be returned to the House and entering a second lottery; those who sold their herbs at higher prices than those permitted; and those who leased their fields to foreigners.22 In the name of the common good, they declared that the main objective of the House was to preserve the pastures for the community, no matter the status of the owner of the flock. They established that those who did not have animals to graze should return their acampo to the House for subsequent allocation by lottery to those who would need it.23

As one might suspect, there were strong factions within the House. Those in favour of the free selling of the grazing plots fought back in a badly attended and manipulated general meeting on 28 December 1631. They managed to pass a decision that the grass could be sold, but only at the same price they had paid to the House for it: 8 dineros. The next year (28 December 1632), the House once again forbade the selling of the grasses. They argued that "we have seen from experience the universal prejudice that results". The Assembly limited the discussion of this issue to the largest and best attended annual assembly.24 In the 1640s, the House accepted the proposal, but those against still managed to postpone its application to 1660 and included in the minutes that the decision was wrong for the common welfare. They linked four factors, arguing that: as the pastures would be sold, they would end up in the hands of foreigners and people who would not look after them; they would put in more animals than those due; the flocks of the small owners would shrink; and those of the bigger ones would monopolize the best and largest fields.

21 Archive of the Casa de Ganaderos, Actos Comunes from 1629 to 1645, 27: para prohibir que ningun ganaderos pueda vender las yerbas de los acampos de la dehessa de dicha Cassa que les caen por suerte y ordena que lo que se hizo y ordeno en el Capitulo General de San Pedro de 1626 que se nombrasen personas para que viesen la dicha ordinacion y que aquella se adaptase y reglase de modo que se prohiba con effecto el dar yerba en la dehesa a los que no traen sus ganados a ella ni acostumbran pacerla con ellos para beneficio universal.

22 Archive of the Casa de Ganaderos, Actos Comunes from 1629 to 1645, 27: Que attendido los abusos que ay la razon de pidir yerba muchos ganaderos que no acostumbran venir a pacerlas con sus ganados diziendo tienen intento de traer a ellas sus ganados y después o permutan aquellos con otras yerbas de la dehessa de menos cantidad o mas ruynes que las suyas y que les ha caydo por suerte y dejan a la Cassa para que se sorteen las dichas yerbas ruynes o venden las tales yerbas a precios mejores de ocho dineros por cabeza que es el permitido… o acogen en sus acampos los ganaderos de otros, ovejas para parizonarlas… Que las hierbas… las dividan en suertes y en su dia las sorteasen entre todos los ganaderos y no se puedan dar ni vender ni disponer de ellas en manera alguna.

23 o hazen otras cosas perjudiziales al intento principal que se lleba y tiene de que los acampos sean todo lo grandes que ser puedan… sea del estado, la condicion, dignidad y calidad que fuese el ganadero...

24 Archive of the Casa de Ganaderos, Actos Comunes from 1629 to 1645, 27, chapters of 28 December 1631 and 28 December 1632.
These people knew that the community was at stake. They knew that the economic and social basis of the community was equal access to the natural resources and the maintenance of the pastures outside the market, as had been the norm during the 15th and 16th centuries. The complaints at the general meeting of 1666 because of the increase in the numbers of animals since 1660, the leasing of some grazing grounds to some families for life from 1680, and eventually the change in the regulations of the community by 1699 to restrict the right to lease the acampos exclusively to 33 families, proved them right. The erosion of community control was a gradual process, linked to a longer time-frame of change, of which we have this important milestone. The constitutional change at the end of the 17th century had consequences at different levels. The economic outcome was the increase in the number of cattle, cows and bulls, for the production of meat for the town, the overexploitation of the fertile plots of the Dehesa and the rise in the number of stationary animals. At a social and political level, the owners of fewer than 200 sheep were excluded from offices as a group of privileged sheepbreeders emerged. The environmental consequences were: the suspension of the inspections of the mountains and the boundaries of the commons, as the new owners were not interested in extensive grazing; the enclosing of the acampos to preserve their new properties; and the rise in the number of animals per field. By the mid-19th century, the suburban landscape of Zaragoza was formed by unused land with patches dedicated either to cereal or to the hunting of small game (Germán Zubero 1979).

Epilogue: communities and sustainability in the larger framework

Traditions, customs, power, institutions and collective representations form the fabric in which the social reproduction of a community takes place. They determine the relationship between the community and its territory, the redistribution of wealth, the conflicts and their resolution. We can find rural communities, universitas and brotherhoods in Southern Europe with a great degree of corporatism: collective bodies with clear boundaries, governed by themselves, organizations which formed the institutional infrastructure for collective action, independent units with their right of assembly, with usufruct over land and animals, representatives, systems of conflict resolution and fictive personalities recognized by external powers. In those places where animal husbandry was a major activity, flocks were usually owned individually by families, but natural resources were frequently managed collectively in large, common and open pastureland. Most of the infrastructure, such as watering places, lambing sheds, sheep ways, resting places, bridges, cabins and pens, was used collectively. All the members of the community knew that their animals would have a share of the boalar (enclosed grazing for the draught animals), the puertos (passes) and the dehesa (enclosed for livestock, sheep and goats). However, the universe of pastoralism is complex. Pastoral activities, unlike agriculture, do not work at a local level. On the contrary, they go beyond local power. Transhumants had to learn to negotiate with the inhabitants of the valleys, and from the 15th century onwards they had to pay for what they used, acknowledging the locals' rights.
Third, the political participation of the members of the communities in the management of their commons is a key mechanism to consider in order to analyze economic performance, environmental sustainability and political resistance (Ostrom 1990). Political participation and the monitoring of behaviour guaranteed the definition of collective and common objectives of communities, prevented the monopolization of power by elites, consolidated the identity of the corporation and tuned the process of appropriating the environment. However, rural communities developed in larger socio-economic and political frameworks which produced changes in the long run. On the one hand, the exposure of these communities and associations to the privileges of universal pasture held by lords, religious houses or towns increased the pressure upon the natural resources. On the other, these relationships provided new links, connections, networks, expectations, identifications and political attitudes for some of the members of the rural communities. In the two case studies presented in this paper, the catalyst for the dissolution of the community was an internal process, a political process within the community, set off by external forces that triggered new identifications of some factions, which pursued a change in the mechanisms of representation and participation.

In the Pyrenean case, paradoxically, the contact with the lowlanders speeded up the relationship between payments and pastures, and hence the process of alienation of the community from its resources. In the long run, by the 17th century, it triggered rural emigration and the start of the decline of the economy of the mountains and of transhumance. The consequence was the preservation of a large common land, a neat division between the pasture and the cultivated land, the definition and regulation of the use of special dehesas, the reduction of agriculture to its minimum, the generalization of a low-benefit extensive livestock husbandry, and the existence of large semi-wild spaces in the mountains. In Zaragoza, the power of the Casa de Ganaderos had managed to keep a fragile ecosystem thanks to the strict regulation of practices during the winter, and to long periods of closure during the summer and autumn. The system created one of the largest commons in Europe and favoured an open landscape of wasteland and pastures in an arid place on the verge of ecological degradation. The break-up of the House meant changes in management, in use rights and in property rights, in the direction of imposing heavier pressure on the natural resources.

These two cases make it increasingly difficult to keep arguing that sheep breeding on common land or with collective practices has a specific impact in environmental or economic terms. Sheep breeding itself does not shape a specific landscape, nor does it produce economic growth or stagnation. Its impact relies on the socio-political constitution of local communities and their role at a regional and national level.
Simplified Analytical Model and Shaking Table Test Validation for Seismic Analysis of Mid-Rise Cold-Formed Steel Composite Shear Wall Building

To develop cold-formed steel (CFS) buildings from low-rise to mid-rise, this paper proposes a new type of CFS composite shear wall building system. The continuously placed CFS concrete-filled tube (CFRST) column is used as the end stud, and the CFS-ALC composite floor with cast-in-place concrete is used as the floor system. In order to predict the seismic behavior of this new structural system, a simplified analytical model is proposed in this paper, which includes the following. (1) A built-up section with a "new material" is used to model the CFS tube and infilled concrete of the CFRST columns; the section parameters are determined by the equivalent stiffness principle, and the "new material" is modeled by an elastic-perfectly plastic model. (2) Two crossed nonlinear springs with hysteretic parameters are used to model a composite CFS shear wall; the Pinching04 material is used to input the hysteretic parameters for these springs, and two crossed rigid trusses are used to model the CFS beams. (3) A linear spring is used to model the uplift behavior of a hold-down connection, and the contributions of these connections to the CFRST columns are considered and individually modeled. (4) A rigid diaphragm is used to model the composite floor system, and this is demonstrated by example analyses. Finally, a shaking table test is conducted on a five-story 1:2-scaled CFS composite shear wall building to validate the simplified model. The results are as follows. The errors in the peak drift of the first story, the energy dissipation of the first story, the peak drift of the roof story, and the energy dissipation of the whole structure between the displacement time-history curves of the test and the simplified model are about 10%, with the largest of these errors being 20.8%. Both the time-history drift curves and the cumulative energy curves obtained from the simplified model accurately track the deformation and energy dissipation processes of the test model. Such comparisons demonstrate the accuracy and applicability of the simplified model, and the proposed simplified model would provide the basis for the theoretical analysis and seismic design of CFS composite shear wall systems.

Introduction

Cold-formed steel (CFS) composite shear wall buildings are widely used around the world due to their advantages, such as light weight, recyclable materials, a high degree of prefabrication, and short construction cycles. Such buildings have commonly been used for low-rise villas or apartments of up to three stories. In China, this type of building system has attracted increasing attention from structural engineers. However, researchers [1] believe that mid-rise CFS buildings would be more appropriate for China. The frequency, structural drift, and cumulative energy obtained from the simplified model are validated by the test results.

A Five-Story Shaking Table Test Model

A 1:2-scaled five-story CFS composite shear wall building was designed and tested, as shown in Figure 1a. The building was designed according to scale similarity. The prototype building was built in a high seismic zone (an eight-degree seismic fortification zone) in China, and it was designed according to the Chinese Code for the Seismic Design of Buildings GB 50011-2010 [30] and the Chinese Technical Code of Cold-Formed Thin-Walled Steel Structures JGJ 227-2011 [31]. The total height of the prototype building was 15 m, with 3 m for each story. The span was 3.6 m. The height of the scaled building was 7.5 m, and the span was 1.8 m.
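The similarity requirements referenced here (tabulated in the paper's Table 1, which is not reproduced in this excerpt) can be illustrated with the standard similitude relations for a 1:2 model. The sketch below assumes gravity-consistent similitude (acceleration scale of 1) with identical materials; the paper's actual scale factors may differ.

```python
# Illustrative derivation of similitude scale factors for a 1:2 model,
# assuming gravity-consistent (acceleration ratio = 1) similitude with
# identical materials. These are textbook relations, not the paper's
# actual Table 1 values.

S_L = 1 / 2      # length scale (model / prototype)
S_E = 1.0        # elastic modulus scale (same materials assumed)
S_a = 1.0        # acceleration scale (gravity cannot be scaled)

S_stress = S_E                   # stress scales with the modulus
S_mass = S_E * S_L**2 / S_a      # from F = m*a and F ~ E*L^2
S_time = (S_L / S_a) ** 0.5      # from a = L / t^2
S_freq = 1 / S_time              # frequency is the inverse of time
S_force = S_E * S_L**2           # force ~ stress * area

for name, s in [("length", S_L), ("time", S_time), ("frequency", S_freq),
                ("mass", S_mass), ("force", S_force)]:
    print(f"{name:<9} scale factor: {s:.3f}")
```

Under these assumptions the time scale is about 0.707 and the frequency scale about 1.414, which is why input earthquake records for such scaled tests are typically compressed in time.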
The scaled building was designed according to the geometrical, load, mass, stiffness, and initial condition similarities, and the scale similarity is shown in Table 1. The length and the width of the shaking table were 6 m and 4 m, respectively. Unidirectional earthquake excitation was input by the Mechanical Testing System (MTS) actuator, and the layout of the test model on the shaking table is shown in Figure 1b. The bearing capacity of the shaking table is 25 t, and its maximum displacement is ±250 mm. The test model used a new scheme to realize the mid-rise construction, which differs from the low-rise construction scheme in the following two ways. (1) Continuous CFRST columns were used as end studs along the height, and they included "+"-shape inner columns, "T"-shape side columns, and "L"-shape corner columns, as shown in Figure 1b. (2) CFS joists and Autoclaved Lightweight Concrete (ALC) boards were used as a composite floor system, and cast-in-place concrete was placed on the composite floor to enhance the integrity of the floor system, as shown in Figure 2a. The details of the composite floor system, the CFS stud placement, and the beam-column joints are shown in Figure 2. The CFS composite shear walls used in the test model include the shear walls along the earthquake direction (on axes 1, 2, and 3) and the shear walls without earthquake input (on axes A, B, and C), as shown in Figures 1 and 2.
All of the shear walls were sheathed with 12-mm gypsum wallboard on both sides and were connected to the CFS frames through screws. A 0.9-m-high and 0.6-m-wide door opening was placed in each shear wall without earthquake input, as shown in Figure 1b. The CFS composite floor was placed between stories, and it included build-up I-shape CFS beams 150 mm in height (built up from U150 × 50 × 0.8 sections, unit: mm), 50-mm-thick ALC boards, and a 30-mm-thick concrete floor, as shown in Figure 2c. Two C89 studs (C89 × 50 × 13 × 0.8, unit: mm) were included in each shear wall, spaced 600 mm apart, as shown in Figure 1b. Hold-downs, 270 mm in height, were placed at the ends of the CFRST columns and were connected to them through screws. In total, five displacement sensors and five acceleration sensors were placed on each story to measure the lateral displacement and acceleration of the test model subjected to the earthquake, as shown in Figure 3. The El Centro record along the north-south direction was used as the prototype earthquake, as shown in Figure 4. The earthquakes were input to the test model sequentially with peak accelerations in multiples of 100 gal, and the vibration responses of the test model were recorded.
Simplified Analytical Model of the Test Specimen The simplified analytical model of the test specimen is presented in Figure 5. Since axis 2 is the axis of symmetry of the test model, only the simplified models for the axis-1 and axis-2 shear walls are shown in Figure 5. These models are realized in the Open System for Earthquake Engineering Simulation software OpenSees [32]. The simplified model adopts the following idealizations. (1) A build-up section beam is idealized as two crossed rigid links, and such links are pinned to the CFRST columns. The CFS composite floor system is idealized as a rigid plane, and the rigid plane is pinned to the CFRST columns.
(2) A CFS composite shear wall (including the sheathing wallboards and the CFS studs) is idealized as two crossed nonlinear springs, and the hysteretic characteristics of the composite shear wall are represented by these nonlinear springs. (3) The hold-down connections at the ends of the CFRST columns are idealized as rigid connections, and the uplift behavior of the hold-down connections is modeled by an axial linear spring following the suggestions of a previous study [26]. Thus, in the simplified model, the three rotational degrees of freedom and two translational degrees of freedom are restrained, and the axial translational degree of freedom is restrained by the axial linear spring.
Modeling the CFRST Column According to the failure modes observed in the shaking table test, the CFRST columns buckled under the combination of gravity loads and earthquakes; thus, the nonlinear behavior of the CFRST columns should be considered in the simplified model. In previous studies, the end studs were simplified as a linear truss [6], or a CFRST column was simplified as a linear beam-column element and a linear spring [26]. However, such simplifications cannot capture the nonlinear behavior of the CFRST columns. Padilla-Llano [33] proposed a complete nonlinear hysteretic model for each CFS stud; however, as stated by Leng et al. [21], such a model would result in excessive computational cost or non-convergence. Therefore, this paper simplifies the CFRST columns as a build-up section with "new material", and the columns are modeled by the nonlinear beam-column elements of OpenSees.
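The column idealization just described might be set up along the following lines in OpenSeesPy; this is a minimal sketch, not the authors' script, and every numerical value (modulus, yield strength, section size, element length) is an illustrative placeholder rather than a Table 2 or Table 3 entry.

# Minimal OpenSeesPy sketch of a CFRST column as a nonlinear beam-column
# with an elastic-perfectly plastic "new material" fiber section.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

Em = 2.0e5    # MPa, placeholder equivalent modulus of the "new material"
fey = 300.0   # MPa, placeholder equivalent yield strength (N_AISC / A_e)
ops.uniaxialMaterial('ElasticPP', 1, Em, fey / Em)  # yields at strain fey/Em

# Fiber section over a placeholder 100 mm x 100 mm equivalent cross-section
ops.section('Fiber', 1)
ops.patch('rect', 1, 10, 10, -50.0, -50.0, 50.0, 50.0)

ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, 1500.0)          # placeholder story-column length, mm
ops.fix(1, 1, 1, 1)
ops.geomTransf('PDelta', 1)       # geometric transformation including P-delta
ops.beamIntegration('Lobatto', 1, 1, 5)
ops.element('dispBeamColumn', 1, 1, 2, 1, 1)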
The material properties of the build-up section are listed in Table 2 according to the mechanical interaction between the CFS and the infilled concrete, including axial force, moment, and torque. The section parameters of the build-up section are determined according to the equivalent bending stiffness of the CFRST columns, whose resistance is determined according to the American Institute of Steel Construction Load and Resistance Factor Design (AISC-LRFD) specification [34,35]: where E_s, E_c, and E_m are the elastic moduli of the CFS members, the infilled concrete, and the "new material", respectively; A_s and A_c are the areas of the CFS members and the concrete in the CFRST columns, respectively; f_y and f_c are the yield strength of the CFS and the compressive strength of the concrete, respectively; and N_AISC is the axial compressive capacity of the CFRST columns. Since the elastic-perfectly plastic model is used for the build-up section under axial force and moment, the yield strength of the "new material" is simplified as N_AISC/A_e, where A_e is the area of the build-up section. Because the CFRST columns were designed with "+"-shape, "T"-shape, and "L"-shape sections in the test model, as shown in Figure 6, the section and material parameters are calculated for each of them; the results are shown in Table 3. In Table 3, the bottom region of the CFRST columns is the region accounting for the contribution of the hold-down members, which can be seen in Figure 5. In the simplified model, the contributions of the hold-down members to the bending stiffness and axial compressive stiffness are considered; such regions are individually modeled, and their section and material parameters are listed in Table 3. Note: E_m, A_e, f_ey, and I_ex are the elastic modulus, area, yield strength, and moment of inertia along the earthquake direction, respectively; the bottom region of the column is the region strengthened by the hold-down (270 mm in height, as shown in Figure 5); A_es, f_esy, and I_esx are the equivalent area, yield strength, and moment of inertia of the CFRST column in the strengthened region.
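The exact AISC-LRFD expressions are not reproduced in this copy; as a hedged sketch, the bookkeeping can be written as below, assuming a transformed-section form for the equivalent bending stiffness (E_m I_e = E_s I_s + E_c I_c) and an AISC-style squash load N_AISC ≈ f_y A_s + 0.85 f_c A_c for the axial capacity. Both assumed formulas should be replaced by the code equations of [34,35] in any real use.

# Hedged sketch of the equivalent "new material" properties. The two
# formulas below are assumptions standing in for the (unreproduced)
# AISC-LRFD expressions referenced in the text.

def new_material_properties(Es, Ec, Is, Ic, As, Ac, fy, fc, Ae, Ie):
    """Return (Em, fey) for the equivalent build-up section."""
    Em = (Es * Is + Ec * Ic) / Ie        # assumed equivalent bending stiffness
    N_aisc = fy * As + 0.85 * fc * Ac    # assumed AISC-style axial capacity
    fey = N_aisc / Ae                    # yield strength N_AISC / A_e, as in the text
    return Em, fey

# Illustrative numbers only (N, mm units): a small steel tube with infill.
print(new_material_properties(Es=2.06e5, Ec=3.0e4, Is=1.2e6, Ic=6.0e6,
                              As=500.0, Ac=9500.0, fy=345.0, fc=30.0,
                              Ae=1.0e4, Ie=8.33e6))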
Modeling the CFS Composite Shear Wall The CFS composite shear wall includes build-up I-shape CFS beams, gypsum wallboards, and CFS studs. Two crossed rigid trusses are used to model a build-up I-shape CFS beam, because no obvious damage or deformation was observed on these CFS beams in the shaking table test. Two crossed two-node link elements are used to model the nonlinear behavior of a composite shear wall, and the Pinching04 material is used to represent the hysteretic behavior of these two-node link elements. The hysteretic parameters for the Pinching04 material are usually determined from the cyclic test results of prototype CFS composite shear walls. Since the composite shear wall is simplified as two crossed elements, the relation between the load-displacement curve of the shear wall and that of the simplified element follows the geometric relationship presented in Figure 7: where θ is the angle between the simplified element and the top track of the shear wall; F and Δ_w are the load and the displacement of the shear wall, respectively; and F′ and Δ′_w are the load and the displacement of the simplified element, respectively. However, there are no cyclic test data for the 1:2 scaled CFS composite shear walls constructed in the test model of this paper. Thus, this paper determines the hysteretic parameters for the scaled composite shear walls according to the fastener-based model, as shown in Figure 8a; such a model was proposed by the CFS-NEES team [36]. In the fastener-based model, the wallboard is treated as a rigid plane, a screw is idealized as a nonlinear spring with hysteretic parameters, and the hysteretic parameters are obtained from cyclic tests on fasteners. Such a model was verified by cyclic test results [21]; the authors' research team validated it through cyclic tests on single-story and double-story CFS composite shear walls [26], and the results for specimen W1 are depicted in Figure 8b. The model is applied through the following steps. (1) Based on the fastener-based model, the hysteretic curves for the target CFS composite shear walls are obtained. (2) The hysteretic parameters for the target shear wall are extracted from these hysteretic curves, as shown in Figure 9. (3) The hysteretic parameters of the nonlinear springs for the target shear wall are determined according to Equations (1) and (2), and such parameters are used in the simplified model. Thus, the hysteretic parameters for the composite shear walls used in the test model are listed in Table 4. Note: epd1-epd4, epf1-epf4, rDispP, rForceP, and uForceP are the hysteretic parameters of the shear walls, as shown in Figure 5; aKlimit, aDlimit, and aFlimit are the damage factors describing the hysteretic characteristics of the shear walls, which can be determined from the stiffness and strength of the loading and unloading stages of the Pinching04 model in Figure 5.
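Equations (1) and (2) themselves are not legible in this copy. A commonly used mapping for a pair of crossed diagonal elements, assumed here purely for illustration, is Δ′ = Δ_w cos θ and F′ = F/(2 cos θ), so that the two diagonals together reproduce the wall shear. A sketch converting wall backbone points into the element backbone fed to Pinching04:

import math

# Hedged mapping of the wall backbone (F, d) to the diagonal-element
# backbone (F', d'), assuming d' = d*cos(theta) and F' = F/(2*cos(theta)).
def wall_to_diagonal(backbone, theta):
    c = math.cos(theta)
    return [(F / (2.0 * c), d * c) for F, d in backbone]

# Illustrative four-point wall backbone (kN, mm), not the Table 4 values;
# the results would populate the ePf1..ePf4 / ePd1..ePd4 slots of Pinching04.
wall_backbone = [(10.0, 2.0), (25.0, 8.0), (30.0, 20.0), (22.0, 35.0)]
theta = math.atan(1.5 / 1.8)   # assumed scaled story height / wall width
print(wall_to_diagonal(wall_backbone, theta))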
Modeling the Hold-Down Connections Due to the restraining effects of the hold-down connections on the CFRST columns, uplift might occur at these connections [6,21,26]. Thus, the deformation due to uplift should be considered in the simplified model. According to the shaking table test results, no obvious damage or deformation was observed on the hold-down connections; thus, such connections were assumed to work in the elastic stage, following the cyclic tests on composite shear walls by the authors' research team [26]. Therefore, a linear spring is used to model the uplift behavior of a hold-down connection, according to the suggestions of Shamim and Rogers [6] and Wang et al. [26]; such a spring considers the tensile deformation but not the compressive deformation. Because the hold-down connections in the shaking table test model are of the same size and type as those of the W89 shear wall in the authors' previous cyclic tests [26], the stiffness of the linear spring was taken as 39.2 kN/mm for two hold-down connections. In the shaking table test model, two, three, and four hold-down connections were used for the corner column, side column, and inner column, respectively. Thus, the stiffness of these springs is k_corner = 39.2 kN/mm, k_side = 58.8 kN/mm, and k_mid = 78.4 kN/mm for the corner column, side column, and inner column, respectively.
Modeling the Composite Floor System Leng et al. [21] proposed rigid and semi-rigid diaphragm models for the composite floor system in a three-dimensional (3D) numerical model. The semi-rigid diaphragm model determined the shear deformation response by engineering judgment, and it was found that the semi-rigid diaphragm was more appropriate for their CFS composite floor system. However, a new ALC-board CFS composite floor system, proposed by Ye et al. [37], is used for the shaking table test model of this paper, as shown in Figure 2. The new floor system differs from the one used by Leng et al. [21]: CFS joists and ALC boards (50 mm to 100 mm thick) form the composite floor, and cast-in-place concrete (30 mm to 50 mm thick) is then poured on the composite floor to enhance its integrity. The load-carrying capacity, stiffness, fire resistance, and integrity are all effectively improved by the covering of the ALC boards and cast concrete [37]. Because of the complicated calculation process and engineering-based judgment required by the semi-rigid diaphragm model of Leng et al. [21], this paper adopts the rigid diaphragm model for the new composite floor system. To validate this choice, both semi-rigid and rigid diaphragm models are used to analyze the roof displacement time-history curves of the shaking table test model in the 200-gal and 800-gal cases. In the simplified model, the RigidPlane element is used for the rigid diaphragm model, and crossed truss elements are used for the semi-rigid diaphragm model, according to the principle of equivalent stiffness.
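Both idealizations just described, the tension-only hold-down spring and the rigid floor diaphragm, might be expressed in OpenSeesPy roughly as follows; the material and constraint names are real OpenSees commands, but their use here is a sketch under the stated assumptions, not the authors' input file, and all node coordinates are placeholders.

import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 3, '-ndf', 6)

# Tension-only linear hold-down springs: ElasticPPGap engages only in the
# direction of the sign of Fy, so a positive (and very large) Fy with zero
# gap gives an effectively linear tension-only response, as assumed above.
k_two_holddowns = 39.2  # kN/mm, from the cyclic tests [26]
for tag, n in [(1, 2), (2, 3), (3, 4)]:     # corner, side, inner columns
    k = k_two_holddowns * n / 2.0           # 39.2, 58.8, 78.4 kN/mm
    ops.uniaxialMaterial('ElasticPPGap', tag, k, 1.0e9, 0.0)

# Rigid diaphragm for one floor: slave the floor nodes to a master node.
ops.node(100, 1.8, 1.8, 1.5)                # master node near floor centroid
for i, (x, y) in enumerate([(0, 0), (1.8, 0), (3.6, 0)], start=101):
    ops.node(i, float(x), float(y), 1.5)
ops.rigidDiaphragm(3, 100, 101, 102, 103)   # rigid in the plane normal to Z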
The results are shown in Figure 10, which demonstrates that the roof displacement time-history curves of the rigid diaphragm model and the semi-rigid diaphragm model closely resemble each other in both the 200-gal case and the 800-gal case. Therefore, the rigid diaphragm model is used to model the new composite floor system in the shaking table test model.
Mass and Damping Ratio of the Building The mass of a story is directly input on the rigid plane of that story, and it is evenly distributed to the floor, CFRST columns, and studs through the rigid plane. The mass of a story includes the composite floor, half of the CFS shear walls and CFRST columns of the stories above and below, and the additional mass modeling the live loads. The mass is 3753 kg for a standard floor (first to fourth story) and 1362 kg for the roof story (fifth story). Besides, the P-delta effect is considered in the simplified model. First, the floor is divided into nine individual regions according to the tributary areas of the nine CFRST columns, and the gravity load of each region is input at the joint between the floor and the CFRST column corresponding to that region. The gravity load is applied directly at the intersecting joint, and its position follows the lateral deformation of the CFRST column joint during the earthquake; therefore, the P-delta effect is captured in the simplified model. The Rayleigh damping ratio is used in the simplified model. Shamim and Rogers [6] stated that the Rayleigh damping ratio has a significant influence on the numerical results for CFS buildings, and that a 4% to 5% damping ratio is suitable for CFS buildings according to trial calculations. It was also found that such a value is larger than the value commonly used for steel structures (a 2% damping ratio), because friction occurs in the fabricated screw connections in the shear walls and in the high-strength bolt connections at the column bases, and such behavior increases the damping ratio of CFS buildings. Meanwhile, a 5% damping ratio was also proposed by Leng et al. [21] for CFS buildings, and shaking table test results on a two-story full-scale CFS building were used to validate this value. Therefore, a Rayleigh damping ratio of 5% is used in the simplified model of this paper.
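For reference, the two Rayleigh coefficients implied by a 5% target ratio can be obtained as below. The first control frequency is the measured 4.98 Hz reported in the next section, while the second control frequency is purely an assumption for illustration.

import numpy as np

# Rayleigh damping C = a0*M + a1*K, with zeta(w) = a0/(2w) + a1*w/2.
zeta = 0.05
f1, f2 = 4.98, 15.0                      # Hz; f2 is an assumed higher mode
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
a0, a1 = np.linalg.solve([[1 / (2 * w1), w1 / 2],
                          [1 / (2 * w2), w2 / 2]], [zeta, zeta])
print(a0, a1)   # mass- and stiffness-proportional coefficients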
Validation of Simplified Models by Shaking Table Test To validate the simplified analytical model, the five-story 1:2 scaled CFS composite shear wall building was tested and compared with the analytical results. Because the residual deformation of the test model after the different earthquake cases was relatively slight (less than 0.5 times the maximum drift), the numerical analysis was carried out individually for each earthquake case (including the 300-gal, 500-gal, and 800-gal cases, which are used to validate the simplified model), following the similar finding from the two-story full-scale shaking table test model of Peterman et al. [17,18]. To start with, a white noise analysis with 100-gal peak acceleration was performed to obtain the natural frequency of the test model, which was 4.98 Hz along the earthquake direction. The first natural frequency of the simplified analytical model was 5.07 Hz along the same direction, and the error between the test and analytical models was 1.8%. The reason for the error is that the boundary condition at the column base is treated as a rigid connection in the simplified model, which overestimates the lateral stiffness of the test model; thus, the natural frequency of the analytical model is larger than that of the test model. Figures 11-13 show the comparisons of the first-story displacement time-history curves, the energy dissipation of the first-story shear wall, the roof displacement time-history curves, and the energy dissipation of the whole structure between the shaking table test and the analytical results for the 300-gal, 500-gal, and 800-gal cases, respectively. It can be seen that the simplified analytical model captures the dynamic responses of the shaking table model and effectively predicts the time-history processes and energy dissipation of the test model within the different earthquake cases. Such comparisons demonstrate the validity of the proposed simplified analytical model. Comparing the test and analytical results in the 300-gal, 500-gal, and 800-gal cases, the errors in the peak drift of the first story, the cumulative energy of the first story, the peak drift of the roof story, and the cumulative energy of the whole structure are no more than 20.8%, and most of them are about 10%, as shown in Table 5. Such comparisons also demonstrate the high precision of the simplified analytical model.
From Figures 11-13, it can be found that the cumulative energies of both the first story and the whole structure of the test model are larger than those of the analytical model, and the gap increases with the peak acceleration of the earthquake. The reasons are as follows. (1) The analytical model underestimates the energy dissipation of the structure, because the Pinching04 hysteretic model stipulates that the unloading stiffness is lower than the loading stiffness, whereas the unloading stiffness of the test shear wall was larger than the loading stiffness, as shown in Figure 8b (owing to the opposing friction force derived from the pretension of the screws between the CFS studs and the gypsum wallboard). (2) The connections between the CFS beams and the CFRST columns are idealized as pin connections in the analytical model; thus, the energy dissipation of these connections cannot be considered.
Furthermore, both the peak drift of the first story and the peak drift of the roof story of the analytical results are larger than the test values, and the errors between them are not correlated with the peak acceleration of the earthquake. Since the backbone curve of the Pinching04 hysteretic model is divided into four segments, the continuous degradation of the stiffness and strength of the composite shear walls is approximated by piecewise functions; this is the source of the error between the tested and analyzed results.
Conclusions This paper proposes a simplified analytical numerical model for the seismic analysis of mid-rise CFS composite shear wall building systems. The simplified model considers the mechanical behaviors of CFRST columns, composite CFS shear walls, hold-down connections, and composite floor systems, and detailed modeling methods are described for this simplified model. Finally, shaking table test results on a five-story 1:2-scaled CFS composite shear wall building are used to validate the simplified model, and the following conclusions are drawn: 1. The nonlinear mechanical behavior of the CFRST columns is considered in the simplified model, including the buckling behavior and the yielding of materials. A build-up section with "new material" is proposed to model the CFS tube and infilled concrete, and the equivalent stiffness principle is used to determine the section parameters. The material property of the "new material" is modeled by an elastic-perfectly plastic model, and the equivalent yield strength is determined by the AISC-LRFD guidance. Besides, the contribution of the hold-down connections to the lateral stiffness and axial strength of the column base of the CFRST columns is also considered in the simplified model, and the strengthened region (270 mm in height) is modeled separately from the rest of the CFRST column. The strengthened regions of the "+"-shape inner, "T"-shape side, and "L"-shape corner CFRST columns are each modeled individually in the simplified model.
2. Two crossed nonlinear springs with hysteretic parameters are used in the simplified model to represent the hysteretic behavior of a composite CFS shear wall subjected to earthquakes, and such behavior is modeled by the Pinching04 material. Two crossed rigid trusses are used to model a CFS beam. The fastener-based modeling method is used to determine the hysteretic parameters of the 1:2-scaled composite shear walls because no cyclic test data are available for them. 3. A linear spring is used to model the uplift behavior of a hold-down connection in the simplified model, and the stiffness of this linear spring is determined from the cyclic test results of the composite shear walls. The stiffness of this spring is set according to the number of hold-down connections at the CFRST inner columns, side columns, and corner columns, respectively. 4. To improve the computational efficiency of the simplified model, the rigid diaphragm method is used to model the composite floor system, and this method is demonstrated by example analyses. To sum up, the simplified model of the shaking table test model is built in the OpenSees software according to the above methodologies. By comparing the peak drift of the first story, the energy dissipation of the first story, the peak drift of the roof story, and the energy dissipation of the whole structure between the simplified model and the test model, it is found that the errors between them are mostly about 10%, with the largest being 20.8%. It is also found that the trends of the time-history curves and cumulative energy curves obtained from the simplified model align closely with the measured results of the test model. The simplified model accurately tracks the whole deformation and energy dissipation processes of the test model. These comparisons indicate that the simplified model exhibits high computational accuracy. Such work provides a basis for the theoretical analysis and seismic design of mid-rise CFS composite shear wall buildings. Author Contributions: All authors contributed equally to this paper.
2019-05-20T13:06:00.894Z
2018-09-06T00:00:00.000
{ "year": 2018, "sha1": "e1fa4020bdffff16a63b81f41dbbcff0946ba5a6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/10/9/3188/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a1116ea16ff0212048109e7f4bc9256300bf91ae", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Economics" ] }
257985107
pes2o/s2orc
v3-fos-license
Freeze-in Dark Matter via Lepton Portal: Hubble Tension and Stellar Cooling We propose a new freeze-in dark matter candidate which feebly couples to the standard model charged leptons. The feeble interactions allow it (i) to freeze-in from the Standard Model thermal bath with its relic density being either a fraction or the entirety of the observed dark matter density and (ii) to radiatively decay to two photons in the keV-scale dark matter mass range with a lifetime larger than the age of the Universe. These features make this model a realistic realization of dark matter with late-time decay to reduce the Hubble tension. We show the best-fit value of H_{0}=68.31(69.34) km s^{-1}Mpc^{-1} in light of the Planck 2018+BAO(+LSS)+Pantheon data sets. We then use stellar cooling data to place constraints on the parameter space favored by the Hubble tension. While the universal coupling scenario is excluded, the hierarchical coupling scenario can be tested by future observations of white dwarfs after a careful look into photon inverse decay, Primakoff and Bremsstrahlung emission of the dark matter in various stellar systems. The viable parameter space may be linked to anomalies in future X-ray telescopes.
Introduction Searches for dark matter (DM), from the first direct detection experiment [1] to cutting-edge experiments such as LZ [2], XENONnT [3], DarkSide-50 [4] and SENSEI [5], have significantly improved the exclusion limits on the cross sections for DM scattering off nucleons or electrons under the hypothesis that it is a weakly interacting massive particle (WIMP). The null experimental results on WIMP-like DM, which is motivated by a variety of new physics models beyond the Standard Model (SM) related to electroweak symmetry breaking, have initiated the study of alternative DM scenarios. As a representative alternative, freeze-in DM can be produced in the early Universe through the freeze-in mechanism [6] as a result of feeble interactions with the SM thermal bath; see [7] for a review. So far, freeze-in DM via the standard model neutrino portal [8][9][10][11][12][13][14] and Higgs portal [15,16] has been studied in the literature. In this study we propose a new freeze-in DM candidate coupled through the standard model charged lepton portal, with the Lagrangian L ⊃ λ_ℓ ϕ ℓ̄ℓ, where ℓ = {e, μ, τ} are the charged leptons, ϕ is a dark scalar degree of freedom with mass m_ϕ carrying neither baryon nor lepton number, and λ_ℓ is the coupling constant (footnote 1). Note that a feeble coupling λ_ℓ makes this model phenomenologically different from the WIMP-like DM studied by [18,19]. The aims of this study are two-fold. The first task is to perform a Markov Chain Monte Carlo (MCMC) analysis of our model in light of cosmological data sets. While the cosmological constraints arising from the Cosmic Microwave Background (CMB) and Large-scale Structure (LSS) can be accommodated in the course of the MCMC analysis as in the aforementioned studies, detecting the freeze-in DM model is rather challenging. In practice, due to the feeble couplings and the light mass, ϕ should have a role to play in various astrophysical stellar systems [45], similar to light axions, dark photons or mini-charged particles. The second task of this work is to use stellar cooling data, some of which can be very precise, to place constraints on the feeble couplings. These constraints are expected to be more stringent than those from current ground-based experiments. The rest of the paper is organized as follows.
In Sec. 2 we calculate the relic abundance of ϕ via the freeze-in mechanism, divided into two cases, i.e., the light mass range m_ϕ < 2m_ℓ and the heavy mass range m_ϕ > 2m_ℓ, corresponding to the loop- and tree-level inverse decay, respectively. We will show that in the light mass range the relic density can fully or partially accommodate the observed DM density while the lifetime is larger than the age of the Universe. In Sec. 3 we first discuss the impacts of the late-time decays of DM into photons on cosmological observables such as the CMB and matter power spectra, then address the parameter space which reduces the Hubble tension in terms of an MCMC fit to two different cosmological data sets. Afterward, we explicitly explore the photon inverse decay, Primakoff and Bremsstrahlung emission of ϕ in various stellar systems. It turns out that current stellar cooling data has excluded the parameter space of the universal coupling scenario, while that of the hierarchical coupling scenario can be tested by future observations of white dwarfs. Finally we conclude in Sec. 5. Footnote 1: Although explicit realizations of this effective interaction are beyond the scope of this study, it can be constructed, e.g., via coupling ϕ to vector-like fermions [17] that mix with the SM leptons, where the feeble coupling reads as λ_ℓ ∼ ϵ²(υ/M_ℓ)², with ϵ, υ and M_ℓ referring to the small mixing angle, the electroweak scale and the vector-like lepton mass, respectively. As shown in [17], M_ℓ below ∼ 200 GeV has been excluded by the LHC searches on multi-lepton final states.
The dark matter model In this section we discuss the relic abundance of the scalar ϕ in the model defined in Eq. (1), which is subject to freeze-in and subsequent decay. The freeze-in production of ϕ is through the inverse decay process ℓ⁻ℓ⁺ → ϕ. This occurs whenever m_ϕ is at least as large as the minimal center-of-mass energy of the two incoming leptons, i.e., m_ϕ ≥ 2m_ℓ. In contrast, in the mass range m_ϕ < 2m_ℓ the tree-level inverse decay is replaced by its one-loop analogue γγ → ϕ via the ℓ triangle diagram. Either of these two inverse decays contributes to the number density of ϕ as in [6], where X, Y refer to ℓ (γ) in the tree (loop)-level decay, the Πs are phase space elements [46] and the fs are phase space densities. Here, |M|²_{X+Y→ϕ} is the squared amplitude of the inverse decay X + Y → ϕ, which is equal to |M|²_{ϕ→X+Y} in the case of CP conservation, as we assume. Solving Eq. (2) in terms of the new variable Y_ϕ ≡ n_ϕ/s, with s the entropy density of the thermal bath, gives the relic density [6], where g_s and g_ρ are the numbers of degrees of freedom in entropy and energy density, respectively, and Γ is the decay width, with the loop function [47,48] approximately equal to −4/3 in the large-x limit. There is a sum over flavor in Eq. (4) if needed. In addition, the freeze-in production of ϕ particles can proceed through annihilation processes such as ℓ⁺ℓ⁻ → γϕ. Such processes contribute to the number density as follows [6], where X, Y and Z denote ℓ and γ. In the limit m_ϕ << m_ℓ the 2 → 2 contribution in Eq. (5) is given by Eq. (6), which is of the same order in λ²_ℓ as the 2 → 1 process in Eq. (3). Compared to the 2 → 1 contribution, the 2 → 2 contribution is subdominant (dominant) in the case with m_ϕ > 2m_ℓ (m_ϕ < 2m_ℓ), as verified by a numerical analysis with the publicly available code micrOMEGAs5.0 [49] adopted in this work.
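Since the explicit freeze-in expressions are hard to read in this copy, the following order-of-magnitude sketch may help fix ideas. It uses the standard 2 → 1 freeze-in yield (Hall et al. [6]) under Maxwell-Boltzmann statistics, constant g_*, and radiation domination, and it neglects the 2 → 2 channels discussed above; none of the numbers below are the paper's.

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

M_PL = 1.22e19   # Planck mass, GeV

def freeze_in_omega_h2(m_phi, gamma, g=1.0, gs=106.75, grho=106.75):
    """Omega h^2 from 2 -> 1 freeze-in, using the standard textbook yield
    Y = 135 g M_Pl Gamma / (8 pi^3 * 1.66 * g_s * sqrt(g_rho) * m^2)."""
    Y = 135.0 * g * M_PL * gamma / (8 * np.pi**3 * 1.66 * gs
                                    * np.sqrt(grho) * m_phi**2)
    return 2.74e8 * m_phi * Y   # with m_phi in GeV

# The x-integral folded into that prefactor: int_0^inf x^3 K1(x) dx = 3 pi / 2
val, _ = quad(lambda x: x**3 * kv(1, x), 0.0, np.inf)
assert abs(val - 1.5 * np.pi) < 1e-6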
In terms of the freeze-in abundance, one obtains the present relic density of ϕ, Ω_ϕh² = Ω_ϕh²|_{t*} e^{−Γ_ϕ(t₀−t*)}, where Ω_ϕh²|_{t*} is the freeze-in relic abundance, t* is the end time of the freeze-in process and t₀ ≈ 13.78 Gyr is the age of the Universe. If ϕ's lifetime τ_ϕ = Γ_ϕ⁻¹ is at least a few times larger than t₀, the e-factor in Eq. (7) can simply be neglected. In contrast, if τ_ϕ is far smaller than t₀, the e-factor takes over the freeze-in contribution and makes ϕ irrelevant in the evolution of the Universe. To serve our purpose we consider ϕ satisfying the following two conditions: • the fraction parameter f_ϕ ≡ Ω_ϕ/Ω^ini_cdm is less than unity, with Ω^ini_cdm the value of the DM relic density reported by the Planck 2018 data [40]; • the lifetime τ_ϕ is larger than the age of the Universe t₀. With these two features, ϕ is a natural realization of DM with late-time decay. m_ϕ < 2m_ℓ Hierarchical coupling scenario. We first consider the case of the hierarchical coupling scenario with λ_e << λ_μ << λ_τ = λ. The left panel of Fig. 1 shows the relic abundance of ϕ projected onto the m_ϕ−λ plane in the mass region 1 keV < m_ϕ < 2m_τ. Explicitly, we show the contours of f_ϕ and of τ_ϕ in units of t₀ in dashed and solid lines, respectively. The highlighted regions point to f_ϕ ∼ 1%−100% and τ_ϕ/t₀ ∼ 1−10². In this figure the shaded gray (purple) region is excluded by overproduction (by τ_ϕ < t₀). Universal coupling scenario. We now consider the case of the universal coupling scenario with λ_e ≈ λ_μ ≈ λ_τ = λ. Compared to the hierarchical coupling scenario, each previous subprocess is now replaced by three copies of it, one for each lepton flavor. Meanwhile, the mass range is now m_ϕ < 2m_e instead of m_ϕ < 2m_τ. The right panel of Fig. 1 presents the relic abundance of ϕ projected onto the m_ϕ−λ plane in the mass region 1 keV < m_ϕ < 2m_e, where the contours of f_ϕ and τ_ϕ are illustrated in the same way as in the left panel. Similar to the hierarchical coupling scenario, the highlighted regions point to f_ϕ ∼ 0.1%−1% and τ_ϕ/t₀ ∼ 1−10. Figure 1: Relic abundance of ϕ in the mass region 1 keV < m_ϕ < 2m_ℓ in the hierarchical coupling scenario with λ_e << λ_μ << λ_τ = λ (left) and the universal coupling scenario with λ_e ≈ λ_μ ≈ λ_τ = λ (right). Contours of f_ϕ and τ_ϕ (in units of t₀) are shown in dashed and solid, respectively. The shaded gray region is excluded by overproduction, while the shaded purple region is excluded by τ_ϕ < t₀. In the left panel the four benchmark points will be used in Fig. 3 and Fig. 4. m_ϕ > 2m_ℓ As seen in Eq. (4), Γ_ϕ in the mass range m_ϕ > 2m_ℓ is dominated by the tree-level decay instead of the radiative decay studied above. The requirement τ_ϕ ≥ t₀ immediately implies that the magnitude of λ is smaller than ∼ 10⁻²⁰−10⁻¹⁹. This is explicitly shown in Fig. 2, where we plot contours of the ϕ relic abundance and of τ_ϕ, with the left and right panels corresponding to the hierarchical and universal coupling scenarios, respectively. In either scenario the small λ now results in a negligible relic abundance in the parameter regions with τ_ϕ ≥ t₀, compared to the case of radiative decay shown in Fig. 1. We remind the reader that in the universal coupling scenario ϕ decays to both γ and ℓ in the mass range 2m_e < m_ϕ < 2m_τ, corresponding to the radiative and tree-level decays respectively, with ℓ at least including e.
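As a quick numerical check of the quoted λ ≲ 10⁻²⁰−10⁻¹⁹ bound, one can convert a width into a lifetime in units of t₀. The width formula below is the standard result for a scalar Yukawa decay ϕ → ℓ⁺ℓ⁻ and is an assumption in so far as the paper's Eq. (4) is not legible here.

import numpy as np

HBAR = 6.582e-25   # GeV * s
T0 = 4.35e17       # age of the Universe, s (about 13.78 Gyr)

def tau_over_t0(lam, m_phi, m_l):
    """tau_phi / t0 for phi -> l+ l-, assuming Gamma = lam^2 m beta^3 / (8 pi)."""
    beta3 = (1.0 - 4.0 * m_l**2 / m_phi**2) ** 1.5
    gamma = lam**2 * m_phi * beta3 / (8.0 * np.pi)
    return HBAR / gamma / T0

print(tau_over_t0(1e-20, 1.0e-2, 0.511e-3))   # ~40 for a 10 MeV phi -> e+ e-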
Cosmological constraints The late-time decay of DM affects both the background and the perturbations of cosmological surveys. In this section we focus on the DM model with m_ϕ < 2m_ℓ and the hierarchical coupling scenario for illustration. The reason for neglecting the universal coupling scenario will be explained in Sec. 4. The impacts on the background directly follow from the background equations of the DM energy density ρ_ϕ and the radiation energy density ρ_r [26,32], ρ′_ϕ = −3(a′/a)ρ_ϕ − aΓ_ϕρ_ϕ and ρ′_r = −4(a′/a)ρ_r + aΓ_ϕρ_ϕ, where primes denote derivatives with respect to conformal time and a is the scale factor. The effects of the late-time decay on the perturbations can be derived from the linear perturbation equations of the radiation and the DM, denoted by δ_r and δ_ϕ respectively, in synchronous gauge, where the results at order ℓ = 1 [26,32] have been extended to ℓ ≥ 2 according to [53,54], with k the wavenumber, h one of the two scalar modes in this gauge, θ_b the divergence of the baryon fluid, σ_T the Thomson scattering cross section referring to collisions between the photon and baryon fluids before recombination, and F_ℓ and G_ℓ defined in [26,31,53]. Without the decay terms in Eq. (9) one recovers the well-known standard continuity and Euler equations of the photon fluid. Note that Γ_ϕ in Eqs. (8)-(9) has been assumed to be dominated by the decay ϕ → γγ, as in the situation with m_ϕ < 2m_ℓ and hierarchical couplings. If not, a branching ratio should be properly taken into account. Effects on cosmological observables We focus on the effects of the late-time DM decay on the CMB power spectra C^XY_ℓ with X, Y = {T, E} and on the DM power spectrum P(k). This topic has been studied in various contexts, such as DM decaying to photons/dark radiation [26][27][28][29][30][31][32][33][34][35][36][37][38][39], under certain assumptions. In particular, in these studies the decaying DM energy density and the decay width have to be treated as two independent input parameters for CLASS [55,56] to solve Eq. (9). As emphasized above, they are actually correlated with each other in an explicit model, e.g., in terms of the fundamental model parameters m_ϕ and λ in our case. Therefore, we directly use the model parameters as the inputs by embedding micrOMEGAs5.0 into CLASS. The angular acoustic scale θ_s = r_s(z*)/D_A(z*), which is accurately measured by the CMB, with r_s(z) = ∫_z^∞ c_s dz′/H(z′) the sound horizon and D_A(z) = ∫_0^z dz′/H(z′) the angular distance, implies that a smaller Ω_cdm after recombination requires a larger Ω_Λ, where z* is the redshift at recombination. This leads to an enhanced Integrated Sachs-Wolfe effect on C^TT_ℓ at large scales (small-ℓ regions), similar to the case of DM decay to dark radiation [26]. As seen in Fig. 3, the deviations in C^TT_ℓ relative to ΛCDM are up to ∼ 12% in the low-ℓ region for a decaying DM component with τ_ϕ ∼ 5−10 t₀ and f_ϕ ∼ 10%. These results are qualitatively consistent with [32], despite the fact that different best-fit values were used. Likewise, there is an enhanced Sachs-Wolfe effect on C^TT_ℓ at small scales (high-ℓ regions) because of the additional radiation due to DM decay. Finally, the DM decay gives rise to sub-dominant effects on the CMB polarization [35]. • For P(k), a larger Ω_Λ suppresses the growth factor D(a) at small scales (large-k regions). This is clearly seen in Fig. 4, where the deviations in P(k) relative to ΛCDM are up to ∼ 0.5% in the large-k regions for τ_ϕ ∼ 5−10 t₀ and f_ϕ ∼ 10%, consistent with the results of [26]. The effect on P(k) is instead enhanced at large scales (small-k regions), since P(k) is proportional to Ω⁻²_cdm at scales k ≤ k_eq, with the subscript "eq" referring to the time of matter-radiation equality. The suppression of P(k) in the large-k regions leads to a relatively mild suppression of σ₈, which is the value of σ_R for z = 0 and R = 8h⁻¹ Mpc, with the definition [57] σ_R² = (1/2π²) ∫₀^∞ dk k² P(k) W_R²(k), where W_R(k) is the Fourier transform of the window function.
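The two integrals quoted above can be evaluated directly. The sketch below assumes a flat ΛCDM background with illustrative Planck-like parameters, approximates the sound speed as c/√3 (baryon loading neglected) and drops the radiation term in H(z), so it only illustrates the r_s-D_A interplay rather than reproducing a fit.

import numpy as np
from scipy.integrate import quad

c = 299792.458          # km/s
H0, Om = 67.4, 0.315    # illustrative flat-LambdaCDM values

def H(z):
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

z_star = 1090.0
# r_s(z*) with c_s ~ c/sqrt(3); the finite upper limit stands in for infinity.
r_s = quad(lambda z: (c / np.sqrt(3.0)) / H(z), z_star, 1.0e5)[0]
D_A = quad(lambda z: c / H(z), 0.0, z_star)[0]   # comoving angular distance
print(r_s, D_A, r_s / D_A)   # theta_s is what the CMB pins down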
As noted in [26], the enhancement of C^TT_ℓ in the low-ℓ regions and the suppression of P(k) in the large-k regions can be simultaneously compensated by a larger N_eff, which implies a certain degree of degeneracy between N_eff and the DM parameters. Compared to the value of H₀ = (73.20 ± 1.30) km s⁻¹ Mpc⁻¹ at 68% CL reported by the local experiments [41], the significance of the Hubble tension is now of order ∼ 3.8(3.0)σ. Without imposing the BAO data set, the ability to relax the H₀ tension can be more obviously enhanced, because the BAO data strongly constrain the value of H₀r_s, which implies that an increase in r_s due to the late-time DM decay requires a reduction in H₀. In Fig. 5 the resolution of the Hubble tension favors the parameter regions of λ ∼ 10⁻¹⁰ and m_ϕ ∼ 1−10 keV, which point to a percent-level f_ϕ and τ_ϕ ∼ 10 t₀. Table 1: The best-fit values of the cosmological parameters in our DM model within the hierarchical coupling scenario with respect to the CMB+BAO(+LSS)+Pantheon data sets, which lead to Δχ² = −1.3 (−1.5) relative to the ΛCDM model, following the χ² criteria in [44]. Apart from the H₀ tension, the MCMC fit to the Planck 2018 TT+TE+EE+lowℓ+lensing+BAO+LSS+Pantheon data sets in Table 1 gives a smaller best-fit value of σ₈ compared to σ₈ = 0.812 [59] in the Planck 2018 ΛCDM. Referring to the value of σ₈ = 0.75 ± 0.03 at 68% CL reported by the LSS [66], this result verifies that the σ₈ tension between the Planck and LSS data can be mildly reduced in our model as a byproduct.
Stellar cooling constraints The direct couplings of ϕ to the SM charged leptons, with m_ϕ of order keV in our model, enable ϕ to be produced in stellar systems [45] such as the Sun, red giants (RGs) and white dwarfs (WDs). Each of these astrophysical objects provides a local thermal bath with a characteristic temperature. Since m_ϕ is comparable with the characteristic temperatures of these stellar systems, a large number of ϕ particles are produced without Boltzmann suppression; after they escape the stellar core, which they do easily as a result of the feeble interactions with the thermal bath therein, they contribute a new form of stellar energy loss. Such new stellar cooling allows us to place constraints on the feeble interactions far stronger than those from ground-based experiments. Universal coupling scenario Let us first consider the main production channels for ϕ in the universal coupling scenario. In this situation the coupling of ϕ to electrons plays the key role, which suggests that • the electron-positron inverse decay eē → ϕ, • the Bremsstrahlung emission of ϕ in e−X scattering with X a nucleus, • the Compton-like scattering process eγ → eϕ, are the important processes. The luminosity per volume in the Bremsstrahlung emission calculated in Ref. [67] implies that λ_e ≤ 7 × 10⁻¹⁶ for RGs, which already excludes the parameter space preferred by the Hubble tension within this coupling scenario. Hierarchical coupling scenario Unlike the universal coupling scenario, where the Yukawa coupling λ_e is the critical parameter, the dominant production of ϕ in the hierarchical coupling scenario is instead determined by the Yukawa coupling λ_τ.
It gives rise to an effective coupling of ϕ to di-photons, implying that • the photon inverse decay γγ → ϕ, • the Primakoff process Xγ → Xϕ, with X a nucleus or an electron, • the Bremsstrahlung emission of ϕ in X + Y → X + Y + ϕ + γ, with X and Y being nuclei, are the main processes. Table 2: Stellar parameters [72] for the Sun, RGs and WDs, with T_c the stellar core temperature, n_e the number density of electrons, R the radius, and L the stellar cooling limit. The Feynman diagrams for these processes are shown in Fig. 6. (a) In the photon inverse decay process, we follow the treatment of majoron (J) production via the neutrino inverse decay process νν̄ → J [68][69][70][71]. The luminosity per volume is given by a thermal phase-space integral, where p₁ = (E₁, p₁) and p₂ = (E₂, p₂) are the momenta of the two incoming photons, p_ϕ = (E_ϕ, p_ϕ) is the momentum of the outgoing ϕ, M_a is the annihilation amplitude, and f(E) = (e^{E/T} − 1)⁻¹ is the Bose-Einstein distribution function. (b) In the Primakoff process, we follow the treatment of new-scalar production via its coupling to photons [72]. The luminosity per volume is expressed in terms of the following quantities: Z_X is the atomic number of the nucleus, k_γ = (E_γ, k_γ) is the momentum of the incoming photon, k_s is the screening scale [72], σ_p is the scattering cross section, and f is the Bose-Einstein distribution function. (c) The Bremsstrahlung emission of ϕ mimics DM production through a dark photon [73]. The luminosity per volume is written in terms of |M|²_np [74], the squared amplitude for the process with no bremsstrahlung; p₁ and p₂, the momenta of the incoming nuclei with energies E₁ and E₂, respectively; p₃ and p₄, the momenta of the outgoing nuclei; and k_γ, k and p_ϕ = (E_ϕ, p_ϕ), the momenta of the outgoing photon, the off-shell photon and ϕ, respectively. Here, f(E) is the Fermi-Dirac distribution function. In this process it is valid to treat the nuclei as non-relativistic particles. Comparing the new emission rates in Eqs. (14), (16) and (18) to the stellar cooling data of the Sun, RGs, and WDs shown in Table 2, we place constraints on the parameter space relevant to reducing the Hubble tension in Fig. 7, where we adopt the criteria that the new luminosity is less than ∼ 0.03 L⊙, ∼ 2.8 L⊙ and ∼ 0.03 L⊙ in the Sun, RGs and WDs, respectively, with L⊙ the luminosity of the Sun. Here, we explicitly show the limits from (a) the photon inverse decay in dotted lines and (b) the Primakoff emission in dashed lines, with the Sun, RGs and WDs shown in black, gray and brown, respectively. For a given stellar system, process (c) gives much weaker constraints than processes (a) and (b), and it has therefore been neglected in Fig. 7. It turns out that the Primakoff emission of ϕ in the WDs offers the most stringent limit. Unlike the Sun, whose luminosity is fixed, the luminosities of WDs have a large uncertainty spanning a few orders of magnitude. If one adjusts L to be lower than the reference value in Table 2 by two orders of magnitude, the upper bound on λ in Fig. 7 is lowered by one order of magnitude, suggesting that the parameter space is nearly excluded. In this sense, our model can be tested by the future observations of WDs made by SDSS [75,76] and Gaia [77,78]. The derived stellar cooling limits in Fig. 7 are subject to uncertainties due to a few simplifications: neither the polarization effect of the photon in the medium nor the dependence of the stellar parameters on the radius has been taken into account.
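Two quick numerical checks on the statements above, using typical textbook core temperatures rather than the (unreproduced) Table 2 entries: the Boltzmann factor for a keV-scale ϕ is indeed at most mildly suppressed, and since the new luminosity scales as λ², lowering the WD cooling budget by two orders of magnitude tightens the λ bound by one.

import numpy as np

T_core = {'Sun': 1.3, 'RG': 8.6, 'WD': 1.0}   # keV, assumed typical values
m_phi = 5.0                                    # keV, inside the favored window
for star, T in T_core.items():
    print(star, np.exp(-m_phi / T))            # Boltzmann factor e^{-m/T}

# L_new scales as lambda^2, so lambda_bound scales as sqrt(L):
print(np.sqrt(1e-2))   # lowering L by 10^2 lowers the lambda bound by 10^1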
Conclusion

In this paper we have studied a new type of freeze-in DM model built on the SM lepton portal. Given the feeble interactions, such DM is produced from the SM thermal bath via the freeze-in mechanism and then decays to photons in the late-time Universe, with a lifetime of order the age of the Universe. Our model therefore serves as a realistic realization of DM with late-time decay that alleviates the Hubble tension. Based on the MCMC analysis of this model in the hierarchical coupling scenario, we have obtained a best-fit value of H_0 = 68.31 (69.34) km s^{-1} Mpc^{-1} with respect to the Planck 2018 + BAO(+LSS) + Pantheon data sets in the parameter regions corresponding to f_ϕ ∼ 1% and τ_ϕ ∼ 10 t_0, which leaves the significance of the Hubble tension at order ∼ 3.8(3.0)σ. We have also used the complementary stellar cooling data to set stringent constraints on the parameter space relevant to the Hubble tension; such an analysis is only possible once the DM model is specified, as we have done here. We have shown that while the universal coupling scenario is already excluded, our quantitative analysis of the photon inverse decay, Primakoff emission, and Bremsstrahlung emission of ϕ in representative stellar systems shows that the hierarchical coupling scenario can be tested by future observations of WDs made by SDSS and Gaia. Finally, we emphasize two points left for future work. First, one can use the Lyman-α data [50-52] to constrain this DM model, after taking into account the fact that ϕ is only a subdominant fraction of the observed DM. Second, besides future observations of WDs, X-ray telescopes may also be used to test this DM component. That said, ϕ cannot be used to address the 3.5 keV X-ray line reported by [79,80].
Acute Disseminated Encephalomyelitis Presenting as Bilateral Ptosis in a Sri Lankan Child

Introduction: Acute disseminated encephalomyelitis is a rare inflammatory demyelinating disease characterized by acute-onset polyfocal neurological deficits associated with encephalopathy. It commonly presents with fever, meningism, seizures, ataxia, motor deficits, and bladder dysfunction. Although cranial neuropathies, including optic neuritis and facial nerve palsies, have previously been reported, children presenting with bilateral ptosis are extremely rare. Here, we report a 3-year-old child with acute disseminated encephalomyelitis presenting with acute-onset bilateral ptosis due to involvement of the single central levator subnucleus of the oculomotor nerve. Case Presentation: A 3-year-old Sri Lankan boy presented with drooping of the upper eyelids for three days and unsteady gait for two days. He did not have seizures, blurring of vision, limb weakness, swallowing or breathing difficulties, or bladder dysfunction. On examination, he had bilateral ptosis, gait ataxia, and dysmetria. His vision, eye movements, and examination of the other cranial nerves were normal. MRI brain revealed high signal intensities involving the subcortical white matter of the parietal and occipital lobes, the midbrain in the area of the single central levator subnucleus of the oculomotor nerve, the cerebellar vermis, and the right cerebellar hemisphere. Based on the clinical features suggesting polyfocal neurological involvement of the midbrain and cerebellum and the characteristic MRI findings, the diagnosis of acute disseminated encephalomyelitis was made. He responded well and rapidly to high-dose intravenous methylprednisolone and showed a complete clinical and radiological recovery. Conclusion: This case report describes a rare presentation of acute disseminated encephalomyelitis: bilateral ptosis due to involvement of the single central levator subnucleus of the oculomotor nerve. It highlights that the presenting manifestations of acute disseminated encephalomyelitis can be subtle and varied; however, timely diagnosis and treatment result in complete recovery.

Introduction

Acute disseminated encephalomyelitis (ADEM) is a rare inflammatory demyelinating disease of the central nervous system characterized by acute-onset polyfocal neurological deficits associated with encephalopathy [1]. The common presentations include fever, headache, meningism, seizures, ataxia, motor and sensory deficits in the limbs, and bladder dysfunction due to myelopathy [2]. Cranial neuropathies, including optic neuritis and facial nerve palsies, are reported; however, bilateral ptosis is rare. Here, we report a 3-year-old child with ADEM presenting with acute-onset bilateral ptosis due to involvement of the single central levator subnucleus of the oculomotor nerve.

Case presentation

A 3-year-old previously healthy Sri Lankan boy presented with drooping of the upper eyelids for three days and unsteady gait with sleepiness and reduced activity for two days. He did not have seizures, blurring of vision, limb weakness, swallowing or breathing difficulties, or bladder dysfunction. He was born to nonconsanguineous, healthy parents, had a normal perinatal period, and was developmentally age-appropriate. There was no recent history of vaccination, febrile illness, snake bite, or behavioural changes before the onset of symptoms. On examination, he was conscious and responsive with a GCS of 15/15.
He had bilateral ptosis; however, his vision, eye movements, pupil size, pupillary light reflexes, fundoscopy, and examination of the other cranial nerves were normal. He walked with a broad-based ataxic gait and had dysmetria with a positive finger-nose test bilaterally. He did not have dysarthria, nystagmus, involuntary movements, or a Romberg sign. Limb examination revealed normal tone and grade 5 muscle power in all four limbs and normal upper limb reflexes. The knee and ankle jerks were easily elicitable, and the plantar responses were flexor bilaterally. There was no muscle fatiguability, sensory involvement, or sign of meningism. The examination of the cardiovascular and respiratory systems was normal. His complete blood count, C-reactive protein, serum electrolytes, and calcium were normal, and the ESR was 22 mm in the first hour. The cerebrospinal fluid (CSF) examination revealed lymphocytic pleocytosis (polymorphs 2/mm3 and lymphocytes 20/mm3) with elevated protein (55 mg/dL); however, the CSF was negative for oligoclonal bands. EEG and noncontrast CT brain were normal. MRI brain revealed high signal intensities involving the subcortical white matter of the parietal and occipital lobes, the midbrain, the cerebellar vermis, the right cerebellar hemisphere, and the right caudate and lentiform nuclei, without diffusion restriction or contrast enhancement (Figure 1). Myelin oligodendrocyte glycoprotein (MOG) antibody testing was not performed due to unavailability. Based on the clinical features suggesting polyfocal neurological involvement of the midbrain and cerebellum and the characteristic MRI findings, the diagnosis of ADEM was made. He was treated with high-dose (30 mg/kg) intravenous methylprednisolone for seven days and a tapering course of oral steroids for four weeks. He demonstrated a gradual improvement in the ptosis and ataxia and had a complete clinical recovery after four weeks. The MRI brain performed after three months showed a marked reduction in the white matter and brainstem signal intensity abnormalities, with minimal residual changes (Figure 2).

Discussion

ADEM is a rare acute demyelinating disorder of the central nervous system with an annual incidence of 0.2-0.4 per 100,000 children [3]. It is characterized by multifocal white matter involvement in the brain and spinal cord [2]. The existing evidence suggests that ADEM results from a transient autoimmune response against myelin or other autoantigens, through molecular mimicry or by nonspecific activation of an autoreactive T-cell clone [4]. Although the disease can occur at any age, it is commonly reported in children aged between 5 and 8 years, with a slight male preponderance [5]. The common neurological manifestations of ADEM are altered sensorium, meningism, seizures, quadriplegia, paraplegia, bladder involvement, dystonia, choreiform movements, nystagmus, ataxia, dysarthria, and neuropsychiatric manifestations [6]. Multiple cranial nerve involvement is also reported; the facial nerve is the most commonly involved cranial nerve. In addition, diplopia and ophthalmoplegia are reported as rare presenting features of ADEM [7]. The most unusual feature of our patient is that he presented with bilateral ptosis without ophthalmoplegia or other cranial nerve palsies. The pathophysiological mechanisms causing ptosis include lesions of the oculomotor nerve and its nucleus, and autonomic disturbance due to Horner syndrome.
Isolated bilateral complete ptosis without paralysis of the external ocular muscles could only be explained by involvement of the single central levator subnucleus of the oculomotor nerve. This is because the innervation of both eyelids is through nerve fibres originating from this single central brainstem subnucleus. The MRI findings of brainstem involvement at the site of this subnucleus in our child confirm the causal relationship between demyelination and the clinical features. The differential diagnoses for our patient were autoimmune encephalitis, neuromyelitis optica, and a first episode of multiple sclerosis. The short duration of the illness before admission, the absence of areflexia and motor weakness, the simultaneous widespread multifocal involvement on MRI brain, and the dramatic response to steroids favour the diagnosis of ADEM. CSF studies in ADEM are often normal or can exhibit pleocytosis with lymphocytic predominance and an elevated protein level. However, true positivity for oligoclonal bands is rare [8]. In conclusion, we report a child with ADEM presenting with bilateral ptosis, which is an unusual presentation of the disease. This case report highlights that the presenting manifestations of ADEM can be subtle and varied; however, timely diagnosis and treatment result in complete and rapid recovery. Data Availability: All data relevant to the case are included in the case report. Conflicts of Interest: The authors declare no conflicts of interest. This case report was done as part of employment at the University of Kelaniya and Colombo North Teaching Hospital, Ragama.
Computational Methods for Coupled Fluid-Structure-Electromagnetic Interaction Models with Applications to Biomechanics

Multiphysics problems arise naturally in several engineering and medical applications, which often require the solution of coupled processes; this is still a challenging problem in computational sciences and engineering. Some examples include blood flow through an arterial wall and magnetically targeted drug delivery systems. For these, geometric changes may lead to a transient phase in which the structure, flow field, and electromagnetic field interact in a highly nonlinear fashion. In this paper, we consider the computational modeling and simulation of a biomedical application concerning the fluid-structure-electromagnetic interaction in the magnetic targeted drug delivery process. Our study indicates that the strong magnetic fields which aid in targeted drug delivery can impact not only fluid (blood) circulation but also the displacement of arterial walls. A major contribution of this paper is modeling the interactions between these three components, which previously received little to no attention in the scientific and engineering community.

Introduction

In the last decade, the rapid development of computational science has provided new methodologies for solving complex multiphysics applications involving fluid-structure interaction in a variety of fields. These range from blood flow interaction with the arterial wall, to the computational aeroelasticity of flexible-wing micro-air vehicles, to the magnetohydrodynamics of liquid-metal-cooled nuclear reactors, to ferromagnetics with biological applications. In these applications, the challenge is to understand and develop algorithms that allow the structural deformation, the flow field, and temperature variations to interact in a highly nonlinear fashion. Coupling these multiphysics with electromagnetic effects makes the associated computational model very complex. Not only is the nonlinearity in the geometry challenging, but in many of these applications the material is nonlinear as well, which makes the problem even more complex. Direct numerical solution of the highly nonlinear equations governing even the most simplified two-dimensional models of such multiphysics interaction requires that all the unknown fields, such as the fluid velocity, pressure, the magnetic and electric fields, the temperature field, and the domain shape, be determined as part of the solution, since none is known a priori.
The past few decades, however, have seen significant advances in the development of finite element and domain decomposition methods. These have provided new algorithms for solving such large-scale multiphysics simulations. Several methods have been introduced in this regard, and their performance has been analyzed for a variety of problems. One such technique is the mortar finite element method, which has been shown to be mathematically stable and has been successfully applied to a variety of applications; see also the references therein. The basic idea is to replace the strong continuity condition at the interfaces between the different subdomains modeling different multiphysics by a weaker one, so as to solve the problem in a coupled fashion. Such novel techniques give us hope of developing new, faster, and more efficient algorithms for complex multiphysics applications. A variety of methods have been introduced, including level set methods [1], fictitious domain methods [2,3], nonconforming hp finite element methods [4,5], multilevel multigrid methods [6], and immersed boundary methods [7]. While these methods enhance our ability to understand complex processes, there is still a great need for efficient computational methods that can not only simulate physiologically realistic situations qualitatively but also analyze and study the modeling of such processes quantitatively. Such multiphysics applications involve the interaction of various components, such as the fluid with the structure, electromagnetics with the fluid, or the fluid and structure interacting fully with electromagnetics.

Electromagnetic-Fluid Interaction. An important application involves the interaction of electromagnetics with a fluid: the behavior of an electrically conducting fluid under a magnetic field is very complex, since an additional Lorentz force arises from the interaction between the velocity field and the electromagnetic field. Understanding such coupled behavior not only helps us to create efficient algorithms but also applies to a variety of magnetohydrodynamic (MHD) applications. Due to its multidisciplinary applications, a solid understanding of MHD is required. In this regard, the Hartmann flow has been studied extensively; it is the steady flow of an electrically conducting fluid between two parallel walls, under the effect of normal magnetic and electric fields. A thorough understanding of such models for electromagnetic-fluid interaction can help us in developing new techniques for complex problems such as magnetic drug targeting in cancer therapy. Such a model would involve the ferrohydrodynamics of blood, which helps to study an external magnetic field and its interaction with blood flow containing a magnetic carrier substance. The analytic models would involve solving Maxwell's equations in conjunction with the Navier-Stokes equations. While new models in this area are just starting to evolve, they often consider the structure to be fixed. There is a need to extend these models to include fluid-structure interaction with electromagnetics, which is another focus of this work.

Proposed New Models.
In this paper, we develop a computational infrastructure for solving coupled fluid-structure interaction with electromagnetic and temperature effects. The rest of the work is organized as follows. Section 2 presents the models, methods, and background required to develop and solve the coupled multiphysics systems. In Section 3, we consider the model of a blood vessel, a permanent magnet, and the surrounding tissue and air in two dimensions. We consider both a nonmoving structure and a moving structure. The deformed structure provides a new geometry, on which the Navier-Stokes equations are solved for the velocity and pressure fields in the bloodstream. A magnetic vector potential generated by the permanent magnet is calculated, which in turn creates a magnetic volume force that impacts the flow in the blood vessel. The flow field changes the displacement of the structure, and the problem is solved once again on the new geometry. The proposed models are validated numerically against benchmark applications. Section 4 presents conclusions, a discussion of the results, and future work on the proposed problems.

A magnetically targeted drug delivery system [8] is based on magnetic particles under the action of an external magnetic field. This is becoming an increasingly effective approach in drug therapy. As this field has evolved over the last decade, growing scientific interest has led to this inquiry into efficient computational models that simulate the experimental process [9]. Our study indicates that the strong magnetic fields which aid in targeted drug delivery can impact not only fluid (blood) circulation but also the displacement of arterial walls. Thus, it is important to have a model which includes the interactions between fluid, structure, and magnetic field in order to study and optimize drug delivery.

In this section, we present a model that describes the interaction between these three components, which previously received little to no attention in the scientific community. To develop an electromagnetic fluid-structure interaction, we incorporate the effects of the electromagnetic field into a fluid-structure model. Gaining a thorough understanding of such a coupled model can help us to understand the efficacy of magnetic nanoparticle-based drug delivery for diseases such as cancer, as has been proposed by various researchers [10,11]. There is significant evidence indicating a need for more promising models which overcome current limitations and improve the magnetic targeting technique.
Mathematical Model and Governing Equations

The model we consider is a blood vessel with a permanent magnet near its surface, as illustrated in Figure 1. For simplicity of presentation, we consider a computational model that comprises three components. Let the computational domain Ω ⊂ R^2 be an open set with global system boundary Γ. Let Ω be decomposed into four disjoint open sets: a fluid subdomain Ω_f (the blood flow), two solid subdomains Ω_si, i = 1, 2 (the blood vessel walls), with respective boundaries Γ_s1 and Γ_s2, and one electromagnetic domain Ω_m (the permanent magnet). Let Γ_i, i = 1, 2, 3, 4, be the interfaces between the solid, fluid, and electromagnetic domains. The structural domain consists of two symmetric arterial vessel walls, denoted by Ω_s1 and Ω_s2. The electromagnetic domain consists of a permanent magnet of dimensions 10 μm × 40 μm placed in free space. The arterial wall describes a structural mechanism that interacts with the flow dynamics of the blood, which in turn is impacted by the permanent magnet, described next.

For this, we use Maxwell's equation for the magnetostatic case (the field quantities do not vary with time) relating the magnetic field intensity H and the electric current density J [12]:

∇ × H = J.

The constitutive relations between B and H depend on the domain [12,13]:

B = μ_0 μ_r,mag H + B_rem for the permanent magnet,
B = μ_0 (H + M) for the blood stream,
B = μ_0 H for the tissue and air,

where μ_0 is the magnetic permeability of vacuum (V⋅s/(A⋅m)), μ_r,mag is the relative magnetic permeability of the permanent magnet (dimensionless), B_rem is the remanent magnetic flux, and M is the magnetization vector in the blood stream (A/m), which is a function of the magnetic field H. By defining a magnetic vector potential A such that

B = ∇ × A,

we get a curl-curl equation for A. Assuming no perpendicular currents, we can simplify to a 2D problem and reduce this equation to a scalar equation for the out-of-plane component. This assumes that the magnetic vector potential has a nonzero component only perpendicular to the plane, that is, A = (0, 0, A_z). The induced magnetization M = (M_x, M_y) is characterized by a nonlinear function of the field [14-17]. To capture the magnetic fields of interest, we can linearize these expressions to obtain

M = χ H,

where χ is the magnetic susceptibility. This magnetic field induces a body force on the fluid. Under the assumption that the magnetic nanoparticles in the fluid do not interact, the magnetic force F = (F_x, F_y) on the ferrofluid for relatively weak fields is given in [16] by a Kelvin-type force density proportional to (M ⋅ ∇)H. Substituting (2) and (3) in (8) leads to an expression for F in terms of A_z, where α is the fraction of the fluid which is ferrofluid. The vector F = (F_x, F_y) is the volume force, which is the input to the Navier-Stokes equations in the next subsection.
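The following Python sketch shows one way to turn a computed out-of-plane potential A_z into a magnetic volume force on a finite-difference grid, using B = ∇ × A, the linearized law M = χH with H ≈ B/μ_0, and a Kelvin-type force density α μ_0 χ (H ⋅ ∇)H. This is a generic weak-field form under the stated assumptions, not necessarily the exact expression (9) of the paper; the synthetic potential, grid, χ, and volume fraction α are illustrative.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # permeability of vacuum [V s / (A m)]

def magnetic_body_force(Az, dx, dy, chi, alpha=1.0):
    """Volume force (Fx, Fy) on a dilute ferrofluid from A = (0, 0, Az).

    Az is sampled on a grid indexed [iy, ix] (rows along y). In 2D,
    B = curl A gives (Bx, By) = (dAz/dy, -dAz/dx); with M = chi * H and
    H ~ B / mu0, the weak-field Kelvin force is F = alpha*mu0*chi*(H . grad)H.
    """
    dAz_dy, dAz_dx = np.gradient(Az, dy, dx)
    Hx, Hy = dAz_dy / MU0, -dAz_dx / MU0
    dHx_dy, dHx_dx = np.gradient(Hx, dy, dx)
    dHy_dy, dHy_dx = np.gradient(Hy, dy, dx)
    Fx = alpha * MU0 * chi * (Hx * dHx_dx + Hy * dHx_dy)
    Fy = alpha * MU0 * chi * (Hx * dHy_dx + Hy * dHy_dy)
    return Fx, Fy

# illustrative use: a smooth synthetic potential over a vessel-sized domain
y, x = np.mgrid[0:100e-6:200j, 0:300e-6:300j]
Az = 1e-6 * np.exp(-((x - 150e-6) ** 2 + (y + 30e-6) ** 2) / (50e-6) ** 2)
Fx, Fy = magnetic_body_force(Az, dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0], chi=0.05)
```

In a solver, (Fx, Fy) would be interpolated onto the fluid mesh and supplied as the body-force term of the Navier-Stokes equations below.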
Modeling the Unsteady Blood Flow. We model the fluid domain for the blood flow via the unsteady Navier-Stokes equations for an incompressible, isothermal fluid flow, written in nonconservative form, where u is the velocity, ρ is the density, p is the pressure, and F is the body force. The viscous stress tensor is τ(u) = 2μ ε(u), where μ is the dynamic viscosity and the deformation tensor is

ε(u) = (∇u + ∇u^T)/2.

The fluid equations are subject to boundary conditions in which t = −p n + 2μ ε(u) n is the prescribed traction on the Neumann part of the boundary, with n the outward unit normal vector to the boundary surface of the fluid. Conditions of displacement compatibility and force equilibrium along the structure-fluid interface are enforced. In order to solve the fluid-structure interaction problem in a coupled fashion, we employ an arbitrary Lagrangian-Eulerian (ALE) formulation in which the characterizing velocity is no longer the material velocity u but a grid velocity û. This allows us to replace the material velocity in (10) with the convective velocity c = u − û [5], from which the weak variational formulation of the fluid problem follows.

Modeling the Structure Equations. The structural domains for the blood vessel walls consist of the arterial vessel walls, denoted by Ω_s1 and Ω_s2. They are modeled via the structural momentum balance, in which d is the structure displacement, ρ_s is the structure density, σ_s is the solid stress tensor, and ∂²d/∂t² is the local acceleration of the structure. This is solved with boundary conditions in which Γ_D and Γ_N are the respective parts of the structural boundary where the Dirichlet and Neumann boundary conditions are prescribed; applied tractions are given on Γ_N, and externally applied tractions act on the interface boundaries Γ_i, i = 1, 2, 3, 4. The unit outward normal vector to the boundary surface of the structure is n_s. The stresses are computed using the constitutive relation described next. Equations (15) enforce the equilibrium of the traction between the fluid and the structure on the respective fluid-structure interfaces. The total strain tensor for a typical geometrically nonlinear model is written in terms of displacement gradients:

ε = (∇d + ∇d^T + ∇d^T ∇d)/2.

For small deformations, the last term on the right-hand side is omitted to obtain a geometrically linear model. Since the objective of this section is to investigate the influence of electromagnetic effects on fluid-structure interaction models, we will consider a geometrically linear model combined with a linear constitutive law. The solid stress tensor is given in terms of the second Piola-Kirchhoff stress S. For the linear material model, we employ a constitutive law of the form

S = C : (ε − ε_0) + σ_0,

where C is the 4th-order elasticity tensor, ":" stands for the double-dot tensor product, and σ_0 and ε_0 are initial stresses and strains, respectively. The weak variational form of the structural equations then reads: find the structure displacement d such that the corresponding virtual-work identity holds for all admissible test functions.

Numerical Results

In this section, we present the numerical results for the electromagnetic-fluid-structure interaction model problem presented above. To better understand the effects of the coupling between the electromagnetic field and fluid-structure interaction models, we first consider the interaction with a rigid structure, which is often employed in research problems interested only in the electromagnetic-fluid interaction. The computational domain (see Figure 2) represents a blood vessel that is 300 micrometers long and 100 micrometers in diameter, with walls 20 micrometers in thickness. All the results presented are for three magnetic fields: 0 T (no magnetic field), 0.5 T, and 1 T. The structure model we consider is linear (MLGL, i.e., materially and geometrically linear), which was introduced in Section 2.
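As a concrete instance of the linear model used here, the sketch below evaluates the small-strain tensor and the stress law S = C : (ε − ε_0) + σ_0, specialized to an isotropic material in Lamé form. The isotropic specialization and the values of E and ν are illustrative assumptions; the arterial-wall constants are not restated at this point of the paper.

```python
import numpy as np

def small_strain(grad_u):
    """Geometrically linear strain eps = (grad u + grad u^T) / 2."""
    return 0.5 * (grad_u + grad_u.T)

def linear_stress(eps, E=1.0e6, nu=0.45, eps0=None, sigma0=None):
    """Isotropic linear constitutive law S = C : (eps - eps0) + sigma0,
    with the Lame constants lambda and mu derived from (E, nu)."""
    n = eps.shape[0]
    eps0 = np.zeros((n, n)) if eps0 is None else eps0
    sigma0 = np.zeros((n, n)) if sigma0 is None else sigma0
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    e = eps - eps0
    return lam * np.trace(e) * np.eye(n) + 2.0 * mu * e + sigma0

grad_u = np.array([[1.0e-3, 2.0e-4],
                   [0.0,   -5.0e-4]])
print(linear_stress(small_strain(grad_u)))
```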
Rigid Structure. Figures 3(a), 4(a), and 5(a) illustrate the influence of the magnetic field on the interaction with the rigid structure. These figures show the surface von Mises stress along with streamlines of the spatial velocity field and the z-component of the magnetic vector potential. While there is no significant impact of increasing the magnetic field on the velocity profile in each of the graphs in Figures 3(a), 4(a), and 5(a), the impact on the magnetic vector potential is as expected: the z-component of the magnetic potential doubles when the magnetic field doubles. Figures 6(a), 7(a), and 8(a) compare the effect of varying the magnetic field on the surface pressure. Unlike the velocity profile, the surface pressure is impacted by increasing the magnetic field, and the doubling effect is seen there as well, as expected.

Coupled Interaction with Moving Structure. Next, we consider the benchmark problem presented above with the structure moving. For this, we employ the ALE formulation for the fluid-structure interaction as described in Section 2. We notice from Figures 3(b), 4(b), and 5(b) that, at t = 0.215 (when the fluid velocity has its maximum value), the structure and the flow pattern are not much impacted by the magnet. For the maximum studied magnetic field of 1 T, the arterial wall is slightly bent towards the magnet. For even larger magnetic fields, not shown in the figures (of the order of 5 T), the magnet intersects with the arterial wall.

Even though we have not seen a big difference in structural deformation and fluid flow for our study case, the fluid pressure is entirely different between the considered magnetic fields (see Figures 6(b), 7(b), and 8(b)). While for B_rem = 0 T the pressure is completely symmetric with respect to the x-axis, the pressure around the magnet increases when the magnetization is 0.5 T and becomes more than double the maximum pressure in the rest of the fluid when B_rem = 1 T. Another experiment we perform is to measure the velocity profile and the displacement at two specific points. From Figures 9 and 10, we notice that, as expected, the velocity and pressure decrease at the center and increase around the boundaries when the structure is moving, mainly because of the dilatation of the structure. While the pressure at the center is not affected much by the presence or absence of the magnetic field, near the magnet the pressure steadily increases with time.

For the measured displacement, we notice in Figure 11 that the wall towards the magnet gets closer to the magnet because of the increasing pressure, while the other wall is virtually unaffected by the presence of the magnetic field.

Conclusion

In this work, we presented the computational modeling and simulation of coupled multiphysics applications. These included a variety of processes, such as fluid dynamics, structural mechanics, and electromagnetic interaction, that impacted the behavior of the physical system in a coupled way. Specifically, this work considered the research question: "How does incorporating an electromagnetic field into fluid-structure interaction models influence the fluid flow and structural deformation?" In answering this question, this work led to the development of a two-dimensional multiphysics problem involving electromagnetics coupled with fluid-structure interaction.
In order to answer this research question, we first presented the mathematical background and the simulation of the interaction between fluid, structure, and magnetic field. The motivation came from researching models for targeted drug delivery in the human body, whose aim is to increase the concentration of the drug in the target area. For example, chemotherapy drug dosage is limited by the negative impact of the drugs on healthy cells. By delivering drugs with high accuracy and maximum concentration to specific areas of the body, it is possible to increase the local dosage of the drug on the tumor, with a lower concentration in the rest of the body. The drug effectiveness is increased while the side effects are reduced. Other examples of applications of magnetic drug targeting are the treatment of cardiovascular conditions, such as stenosis and thrombosis. Thus, it is important to model not only the blood circulation but also the deformation of the blood vessel in order to improve the accuracy, which is the focus of the second problem in this work. In particular, it is important to have an accurate model of the interaction between the three components for optimizing the shape, size, and magnetic power, in order to deliver the drugs efficiently to the desired place and minimize the side effects. Our results from this work clearly indicate the importance of coupling the magnetic field with a fluid-structure interaction model. More importantly, the results suggest the importance of using moving walls versus nonmoving walls in this coupled electromagnetic fluid-structure interaction.

While this work provided a lot of insight into the importance of electromagnetic effects in fluid-structure interaction, there is scope to enhance it by considering the effects of non-Newtonian rheological properties along with the extension to materially and geometrically nonlinear models. In the last two decades, collagenous soft tissues have been found to exhibit viscoelastic behavior, which includes time-dependent creep and stress relaxation, rate-dependence, and hysteresis in a loading cycle. As suggested in [18], this hysteresis is less sensitive than the stiffness to the loading rate, and this phenomenon is generally found in soft tissues and elastomers [18]. One future direction would be to extend the structural mechanics module to incorporate viscoelasticity and then study its influence on our models. The computational models in this work were two-dimensional for simplicity, but they can be naturally extended to three dimensions. With the increasing size of the problem comes the need for more computational resources. There is intensive ongoing work in the area of domain decomposition that helps to address how to solve coupled multiphysics problems efficiently. So as the problem dimension becomes bigger, one must also resort to domain decomposition type approaches, which can then open up more avenues for parallelizing the algorithms that have been developed.

Figure 2: Domain and points of interest.
Figure 9: Velocity for a center and edge point inside the fluid.
A homozygous STIM1 mutation impairs store-operated calcium entry and natural killer cell effector function without clinical immunodeficiency

To the Editor: Stromal interaction molecule 1 (STIM1) is a transmembrane protein pivotal to store-operated calcium entry (SOCE) that localizes to either the cell or endoplasmic reticulum (ER) membranes, with the N-terminus in either the extracellular space or the ER, respectively. Plasma membrane ORAI calcium release-activated calcium modulator 1 (ORAI1) Ca2+ channels are activated by STIM1. Families previously described with recessive STIM1 mutations (MIM #612783) had life-threatening viral, bacterial, and fungal infections; developmental myopathy; hypohidrosis; and amelogenesis imperfecta (AI; generalized developmental enamel abnormalities).1, 2, 3 We investigated a consanguineous family segregating a novel syndrome of recessive AI and hypohidrosis by using autozygosity mapping and clonal sequencing. A homozygous rare missense mutation in STIM1 (p.L74P) in the EF-hand domain was identified (see the Methods and Results sections in this article's Online Repository at www.jacionline.org). The family was re-evaluated, with particular attention paid to features associated with recessive STIM1 mutations (Table I and see Tables E1-E3 in this article's Online Repository at www.jacionline.org). The 2 affected cousins (18 and 11 years old, respectively) did not have overt clinical immunodeficiency. Further evaluation of their immune systems showed a normal immunoglobulin profile with an adequate specific antibody response to both nonlive (pneumococcus, tetanus, and Hib) and live (mumps, measles, and rubella) vaccinations. In addition, both subjects had detectable IgG against varicella zoster virus after a previous uncomplicated primary infection. The younger cousin was also found to have IgG against EBV viral capsid antigen, suggesting previous exposure, but neither showed any evidence of acute infection or previous exposure to cytomegalovirus.

Table I: Summary of the main clinical and clinical immunologic features in subjects with either homozygous or heterozygous STIM1 c.221T>C mutations.

Lymphocyte studies showed stable CD8 T-cell depletion in the older affected subject only. Other lymphocyte subsets, including CD4 T, natural killer (NK), and B cells, were within the normal range (Table I). However, despite normal PHA and anti-CD3 stimulation responses, T-lymphocyte and NK cell SOCE was grossly abnormal, which is consistent with disruption of the Ca2+-binding EF-hand and in keeping with previous reports for recessive STIM1 mutations (see Fig E1, A, in this article's Online Repository at www.jacionline.org).1, 2, 3 The defect in NK cell SOCE was associated with impaired NK cell effector function, as shown by assays of granule exocytosis and intracellular IFN-γ production in response to K562 tumor cells (see Fig E1, B). Following recently published mouse studies, which confirmed the importance of STIM1 to neutrophil SOCE and associated functions,4 we also evaluated neutrophil function. This was found to be within normal limits.

Fig E1: Defective SOCE and impaired NK cell function in STIM1-Leu74Pro patients' cells. A, Calcium flux in lymphocytes after anti-CD3/anti-CD16, 1 μmol/L thapsigargin, or 500 nmol/L ionomycin administration. B, Granule exocytosis.

Despite abnormal immune system SOCE, the affected subjects in this case appear to be able to compensate for this deficit and avoid overt immunodeficiency.
It is possible that the relative preservation of T-cell function might compensate for NK cell dysfunction. Neither cousin might yet have encountered a pathogen that would expose this particular immune system limitation (see Table E2). An ability to mount a partial response to viral infections was reported in a family with clinical immunodeficiency and a history of viral infections caused by a homozygous missense R429C change affecting the STIM1 cytoplasmic domain.2 A mouse model characterized by conditional knockout of Stim1 and Stim2 in both CD4+ and CD8+ T cells has recently provided further insight into the importance of Stim1 in immune system development and in virus-specific memory and recall responses, which prevent acute viral infections from becoming chronic.5

Recessive STIM1 mutations can be associated with other immune dysregulation, including autoimmune disease. The older cousin had a transient episode of idiopathic thrombocytopenic purpura when 2 years old that might have been unrelated to the STIM1 mutations. There were no other clinical or serologic markers consistent with autoimmune disease, and regulatory T-cell numbers were normal.

Both cousins were intolerant of warm environments and aware of their inability to sweat normally. This limited the older cousin's ability to participate in sport. There was no clinical or serologic evidence of myopathy. This is in contrast to other recessive STIM1 mutations and also to dominant STIM1 mutations affecting the EF-hand that cause tubular aggregate myopathy (MIM #160565).6

Hypomineralized AI affected the primary and secondary dentitions of both affected cousins (see Fig E2 in this article's Online Repository at www.jacionline.org), which is in keeping with reports of other recessive STIM1 mutations. The cousins were physically small (height, weight, and head circumference <0.4th percentile) when assessed at 18 years and 9 years, 10 months of age, respectively. Without comparable data from other subjects with recessive STIM1 mutations, it is unclear whether this is a cosegregating feature.

Fig E2: Hypomineralized AI as the presenting feature in a family with the STIM1 L74P change. A, Pedigree of the consanguineous family investigated; the 2 affected cousins with AI and hypohidrosis are shaded black, and genotypes of the c.221T>C variant are indicated.

The L74P STIM1 change within the EF-hand domain precedes the first Ca2+-binding aspartate residue by 2 amino acids (see Fig E2) and therefore might be expected to distort the Ca2+-binding region of the protein. Therefore we compared the response of mutant YFP-STIM1 (L74P) to the depletion of Ca2+ stores after thapsigargin or cyclopiazonic acid (CPA) treatment with that of wild-type YFP-STIM1 and the previously published EF-hand mutant7 YFP-STIM1 (D76A; see Fig E3 in this article's Online Repository at www.jacionline.org).

Fig E3: STIM1 localization and Ca2+ flux in cells transfected with STIM1 constructs. A, TIRFM of HEK293 cells transfected with either wild-type, D76A mutant, or L74P mutant YFP-STIM1 after treatment with 2 μmol/L thapsigargin.

Using total internal reflection fluorescence microscopy (TIRFM), we replicated previous observations that wild-type YFP-STIM1 relocalizes to puncta proximal to the plasma membrane after treatment of transfected HEK293 cells with 2 μmol/L thapsigargin to deplete ER Ca2+ stores through sarcoendoplasmic reticulum calcium transport ATPase (SERCA) inhibition (see Fig E3, A).
The EF-hand mutant YFP-STIM1 (D76A) was present in these puncta before thapsigargin treatment, with no observable response to thapsigargin (see Fig E3, A). Similarly, mutant YFP-STIM1 (L74P) showed no response to thapsigargin but also appeared to form constitutive puncta, which were less distinct in appearance than those of the D76A mutant (see Fig E3). We compared Ca2+ fluctuations in HEK293 cells transfected with ORAI1-CFP and either wild-type YFP-STIM1, mutant YFP-STIM1 (D76A), or mutant YFP-STIM1 (L74P; see Fig E3, B and C). Both YFP-STIM1 (D76A) and YFP-STIM1 (L74P) transfected cells had increased basal Ca2+ concentrations compared with wild-type YFP-STIM1 and reduced peak and integral responses to CPA-induced SERCA inhibition (see Fig E3, B and C). However, in contrast to the EF-hand mutant YFP-STIM1 (D76A), YFP-STIM1 (L74P) did not demonstrate reduced SOCE after CPA washout and Ca2+ restoration, suggesting that the previously reported desensitization of SOCE observed with the YFP-STIM1 (D76A) mutant does not occur with the YFP-STIM1 (L74P) mutant form. Therefore the L74P mutation appears to result in a distinct molecular phenotype compared with the loss of function observed in immunodeficient patients and the constitutive activation observed in patients with myopathy.

This study is the first to report recessive STIM1 mutations in patients presenting with AI and hypohidrosis without overt clinical immunodeficiency or myopathy. Clinical immunologic investigations were consistent with abnormal NK cell and T-lymphocyte function that might be expected to be associated with ongoing clinical immunodeficiency. However, despite severely abnormal SOCE, this was not the case in these patients. Missense mutations affecting the EF-hand can have very different clinical phenotypes with respect to the immune system, muscle, sweating, and enamel formation. This has important implications for clinical evaluation, as well as for understanding the biological functions of STIM1.
We thank the family for participating in this study. We thank Dr Gareth Howell for technical assistance with cell sorting and Dr Peter Baxter, Consultant Paediatric Neurologist at Sheffield Children's NHS Foundation Trust, for his comments. We thank the Exome Aggregation Consortium and the groups that provided exome variant data for comparison.
A full list of contributing groups can be found at http://exac.

Participating family. A consanguineous family of Pakistani heritage was reviewed in the clinical genetics clinic with regard to intolerance to warm environments and generalized dental enamel defects of both dentitions. Sample collection was performed after obtaining informed consent from the patients according to the principles of the Declaration of Helsinki and after local ethics approval. Detailed clinical evaluation was undertaken in appropriate clinical settings.

Genetic mapping. DNA was extracted from blood by using standard procedures. DNA from the 2 affected subjects was genotyped with Affymetrix 6.0 SNP microarrays (Affymetrix, High Wycombe, United Kingdom), and regions of homozygosity were identified by using AutoSNPa software.E1 Linkage was confirmed by means of analysis with fluorescence-labeled polymorphic microsatellite markers on a genetic analyzer (3130xl Genetic Analyzer; Applied Biosystems, Warrington, United Kingdom) using genotyping software (GeneMapper version 4.0; Applied Biosystems). Linkage analyses were performed with LINKMAP and MLINK from the FASTLINK software package.E2
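A minimal sketch of the autozygosity-mapping idea used above (AutoSNPa implements a more sophisticated version of this): scan a list of biallelic SNP genotypes for long runs at which both affected subjects are homozygous for the same allele. The genotype encoding and length threshold are illustrative assumptions.

```python
def shared_homozygous_runs(genotypes, min_snps=50):
    """Find runs of consecutive SNPs at which both affected subjects are
    homozygous for the same allele.

    genotypes: list of (position, gt_subject1, gt_subject2), with genotypes
    encoded as 'AA', 'AB', or 'BB' (hypothetical encoding).
    Returns (start_pos, end_pos, n_snps) for runs of at least min_snps SNPs.
    """
    runs, start, count = [], None, 0
    for pos, g1, g2 in genotypes:
        if g1 == g2 and g1 in ("AA", "BB"):
            if start is None:
                start = pos
            count += 1
            end = pos
        else:
            if start is not None and count >= min_snps:
                runs.append((start, end, count))
            start, count = None, 0
    if start is not None and count >= min_snps:
        runs.append((start, end, count))
    return runs

# toy check: 100 consecutive shared homozygous SNPs form a single run
print(shared_homozygous_runs([(i, "AA", "AA") for i in range(100)]))
```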
Clonal and Sanger sequencing. We designed a SureSelect Target Enrichment Reagent (Agilent Technologies, Edinburgh, United Kingdom) targeting coding exons within the disease interval in parallel with the capture of disease intervals for 7 other unrelated disorders. The affected subject IV:N was sequenced with 80-nt reads on an Illumina (San Diego, Calif) GAIIx sequencer. Raw data were processed with the Illumina pipeline (version 1.3.4), and reads were aligned to the human reference sequence (hg19/GRCh37) by using Novoalign software (Novocraft Technologies, Selangor, Malaysia). Alignments were processed in the SAM/BAM formatE3 with Picard and the Genome Analysis Toolkit (GATK)E4,E5 to correct alignments around indel sites and to mark potential PCR duplicates. Variants were called in the Variant Call Format by using the Unified Genotyper function of GATK. Filtering of common variation and prediction of the functional consequences of variants were performed by using in-house scripts. PCR products for STIM1 exon 2 and STK33 exon 3 were amplified and sequenced by using the primer pairs shown in Table E1. PCR product cleanup was performed with ExoSAP-IT (Affymetrix) before Sanger sequencing with the BigDye Terminator Cycle Sequencing Kit, version 3.1 (Applied Biosystems) and analysis on an ABI 3130XL DNA analyzer (Applied Biosystems).

Flow cytometric analysis of calcium flux. PBMCs were labeled with Dulbecco modified Eagle medium containing 5 μmol/L Indo-1 for 45 minutes at 37°C and then washed and cooled on ice. Cells were incubated for 20 minutes on ice with 5 μg each of unconjugated CD16 (3G8) and CD3-PerCP (OKT3; BD Biosciences, San Jose, Calif) antibodies and costained for the gating markers CD19 (SJ25C1) and CD56 (NCAM16.2; BD Biosciences). Cells were washed and resuspended in cold HBSS without calcium. Samples were warmed to 37°C and immediately collected on a UV laser-equipped LSRII flow cytometer for 90 seconds and then spiked during collection with 1:100 goat anti-mouse antibody for a further 60 seconds (Jackson Laboratory, Bar Harbor, Me), followed by a 1:100 dilution of 200 mmol/L CaCl2 in PBS solution, and collected for a further 9 minutes. Alternatively, samples were stimulated with the calcium ionophore ionomycin at 500 ng/mL (Sigma-Aldrich, St Louis, Mo) or 1 μmol/L thapsigargin (Sigma-Aldrich) to deplete ER stores of calcium, thereby triggering SOCE and an intracellular calcium ([Ca2+]i) flux. Analysis was performed with FlowJo software (TreeStar, Ashland, Ore), calculating the ratio of calcium-bound to free Indo-1.

NK cell responses. PBMCs were isolated from diluted blood by means of Ficoll separation, followed by NK cell purification by means of negative selection (with immunomagnetic reagents from Miltenyi Biotec, Bergisch Gladbach, Germany). Isolated NK cells were stimulated with K562 tumor cells alone or in combination with 20 ng/mL IL-12/IL-18 (PeproTech, Rocky Hill, NJ; to maximize IFN-γ production by tumor-stimulated cells) and incubated for 6 hours at 37°C with both GolgiStop and GolgiPlug (BD Biosciences). Cells were stained for the surface markers CD107a (clone H4A3), CD56 (NCAM16.2), and CD3 (OKT3; BD Biosciences) before fixation for 15 minutes and permeabilization for 30 minutes with the AbD Serotec (Oxford, United Kingdom) intracellular staining kit. Cells were stained with anti-IFN-γ (B27) and collected on an LSR II flow cytometer and analyzed in DIVA software (BD Biosciences).

STIM1 constructs for transfection studies. YFP-STIM1 (Addgene plasmid 18857) and the EF-hand mutant YFP-STIM1 (D76A; Addgene plasmid 18859) constructs were provided by Tobias Meyer through Addgene (Cambridge, Mass). The ORAI1-CFP construct was provided by Anjana Rao (Addgene plasmid 19757). The L74P mutant YFP-STIM1 was produced by means of site-directed mutagenesis of the wild-type YFP-STIM1 plasmid by using the QuikChange II kit (Agilent Technologies, Santa Clara, Calif) per the manufacturer's instructions. The sequences of all 4 constructs were confirmed by means of Sanger sequencing, as above.
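The Indo-1 ratio analysis described above can be sketched as follows: the ratio of Ca2+-bound to free Indo-1 emission is tracked over time and normalized to the pre-stimulation baseline. The synthetic traces and the baseline window are illustrative assumptions; in the study this calculation was performed inside FlowJo.

```python
import numpy as np

def indo1_ratio(bound, free, n_baseline=90):
    """Normalized ratio of Ca2+-bound to free Indo-1 emission over time.

    bound, free: fluorescence traces (arbitrary units); the first
    n_baseline samples are taken as the pre-stimulation baseline."""
    r = np.asarray(bound, float) / np.asarray(free, float)
    return r / r[:n_baseline].mean()

# synthetic traces mimicking a SOCE response after stimulation at t = 90 s
t = np.linspace(0.0, 600.0, 601)
rise = (t > 90) * (1.0 - np.exp(-np.clip(t - 90.0, 0.0, None) / 60.0))
bound = 80.0 + 40.0 * rise
free = 120.0 - 40.0 * rise
ratio = indo1_ratio(bound, free)
print(ratio[0], ratio[-1])  # ~1.0 at baseline, elevated after store depletion
```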
Calcium measurements in overexpressing cells
HEK293 cells were doubly transfected with ORAI1-CFP and either wild-type YFP-STIM1, mutant YFP-STIM1 (D76A), or mutant YFP-STIM1 (L74P). Twenty-four hours after transfection, cells expressing both CFP and YFP constructs were selected by using a Becton Dickinson FACSAria II cell sorter (BD Biosciences) and plated on glass coverslips. In each case basal [Ca2+]i levels were recorded, after which Ca2+ was removed from the perfusate (replaced with 1 mmol/L ethylene glycol-bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid), and new basal levels of [Ca2+]i were determined. Cells were then exposed to CPA (100 μmol/L), and the resultant transient increases in [Ca2+]i levels were measured for peak amplitude and integral. After washout of CPA, Ca2+ (2.5 mmol/L) was readmitted to the perfusate, and capacitative Ca2+ entry was quantified as the maximal increase in [Ca2+]i observed. Data are presented as representative examples (see Fig E1, B) and mean ± SEM values (see Fig E1, C) determined from 12 control recordings, 12 recordings of D76A-expressing mutants, and 13 recordings of L74P-expressing mutants. Statistical significance was determined by means of ANOVA.

Identification of a novel homozygous missense p.L74P change in STIM1
Autozygosity mapping identified a single region of homozygosity shared by both affected cousins on chromosome 11 between rs11606404 and rs3815045 (chr11:2,241,215-61,669,946, hg19). Multipoint linkage analysis of the markers D11S921, D11S899, D11S915, and D11S4949 against disease by using LINKMAP resulted in a maximum LOD score of 3.06 at marker D11S899. On merging of overlapping exon intervals, the disease interval contained 3,838 RefSeq coding regions comprising 751,450 bp, 3,784 (739,189 bp or 98.4%) of which could be targeted while avoiding designing baits over repeat-masked regions. After target enrichment, sequencing, alignment, and postprocessing, 94.6% of targeted bases were covered by 5 or more nonduplicate reads with a minimum Phred-like base quality score of 17 and a minimum read mapping quality of 20. A total of 526 variants passing standard GATK filters were identified within 20 bp of a coding exon within the disease locus. Variants were removed if present in dbSNP129 or in later versions of dbSNP with a minor allele frequency of 1% or greater, if present in other samples sequenced locally (n = 31), or, in the case of missense variants, if predicted to be benign by using PolyPhen-2. E6
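The in-house filtering scripts mentioned above were not published; the Python sketch below reproduces the three stated exclusion rules (dbSNP minor allele frequency of 1% or greater, presence in the n = 31 locally sequenced samples, and a benign PolyPhen-2 prediction for missense variants). The input file and its column names are assumptions.

import csv

MAX_DBSNP_MAF = 0.01      # exclude if MAF >= 1% in dbSNP, as described above
LOCAL_COHORT_N = 31       # locally sequenced samples used for exclusion

def keep(variant):
    """Return True if a called variant survives the described filters."""
    maf = variant["dbsnp_maf"]
    if maf not in ("", ".") and float(maf) >= MAX_DBSNP_MAF:
        return False
    if int(variant["local_sample_hits"]) > 0:          # seen in the 31 local samples
        return False
    if variant["type"] == "missense" and variant["polyphen2"] == "benign":
        return False
    return True

# Hypothetical tab-separated annotation of the 526 candidate variants.
with open("candidate_variants.tsv") as fh:
    survivors = [v for v in csv.DictReader(fh, delimiter="\t") if keep(v)]
print(len(survivors), "variants remain")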
After these filtering steps, only 3 homozygous variants remained that might be predicted to alter gene function. The first of these (NM_152316: c.3G>C) was considered unlikely to be pathogenic despite altering the initiation codon of ARL14EP, because of the presence of another in-frame initiation codon immediately adjacent to the mutated codon and the lack of conservation of the first of these ATG codons in mammals. Of the remaining 2 changes, a missense mutation in STK33 (NM_030906: c.146G>A; p.G49D) was found in 4 of 96 ethnically matched control samples, whereas a missense mutation in STIM1 (NM_003156: c.221T>C; p.L74P) was excluded in 192 ethnically matched control samples and found to segregate as expected for a recessively inherited disease within the family. Subsequent interrogation of the Exome Aggregation Consortium database showed that although the STK33 variant was present at a frequency of 1.49% in subjects of South Asian ancestry, the STIM1 variant was not detected at all in the cohort of 60,706 subjects (Exome Aggregation Consortium, Cambridge, Mass; http://exac.broadinstitute.org; accessed February 2015). Accordingly, the homozygous c.221T>C; p.L74P mutation identified in STIM1 was considered to be the cause of the observed phenotype, based on the genetic data and the phenotypic overlap with previously reported recessive STIM1 and ORAI1 mutations.

[Figure legend] Representative electropherograms are shown alongside the pedigree. B, The hypomineralized AI was characterized by opaque discolored enamel on clinical examination, with radiographs of unerupted teeth consistent with a near-normal volume of enamel and a clear difference in radiodensity between enamel and dentine. Asterisks mark teeth that have been restored. C, Schematic illustration of the STIM1 protein showing the domain structure. Positions of the AI- and hypohidrosis-associated L74P mutation (red), dominant TAM or Stormorken syndrome mutations (gray), and recessive syndromic immunodeficiency mutations (black) are indicated above the protein. E-rich, glutamate-rich region; K, lysine-rich region; MLS, microtubule tip localization signal; P, proline/serine-rich region; SAM, sterile α-motif domain; SOAR, STIM1-Orai1 activating region; TM, transmembrane domain. D, Alignment of STIM1 EF-hand orthologous protein sequences. Although p.L74 is conserved in mammals, it is not as strongly conserved as the amino acids mutated in dominant TAM. E, NMR structure of STIM1. E7 L74 is shown in red, TAM mutations are shown in dark gray, and Ca2+-binding residues, mutation of which causes constitutive STIM1 activation, are shown in yellow. Substitution of leucine 74 by proline is anticipated to distort the EF-hand loop, interfering with conformational changes in the presence/absence of Ca2+.

[Table: comparison of reported cases of STIM1 disease across studies (Fuchs et al, 2012, E10; Wang et al, 2014, E11; Schaballie et al, 2015, E12; this study; Bohm et al, 2013, E13; Morin et al, 2014, E14), listing the individuals (autosomal recessive cases Pr1-Pr9 and V2 and V3) or diagnoses (autosomal dominant, including tubular aggregate myopathy‡) and their outcomes: Pr5 alive (HSCT); Pr7 lost to follow-up at 5 years; Pr8 and Pr9 alive; V2 and V3 alive; all others alive.] AIHA, autoimmune hemolytic anemia; AD, autosomal dominant; AR, autosomal recessive; CK, creatine kinase; HSCT, hematopoietic stem cell transplantation; ITP, idiopathic thrombocytopenic purpura; NA, not applicable; NC, no comment made; NR, comment made but feature not recognized. *Mutation confirmed in Pr1 and Pr3; no DNA sample available for Pr2; mutation identified after death. ‡A missense change reported in tubular aggregate myopathy and the missense change reported as the cause of Stormorken syndrome have also been identified as the causes of York platelet syndrome, which is characterized by myopathy and platelet abnormalities (Markello et al, 2015 E15).
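As purely illustrative context for the frequency argument above (our arithmetic, not the authors'), Hardy-Weinberg expectations show why an allele at the quoted 1.49% frequency sits poorly with a rare recessive phenotype, in contrast to the entirely absent STIM1 variant.

q = 0.0149                       # STK33 allele frequency in South Asians (quoted above)
expected_homozygotes = q ** 2    # Hardy-Weinberg expectation per individual
print(f"{expected_homozygotes:.2e}")            # ~2.2e-4, i.e. ~1 in 4,500
# In a cohort the size of ExAC (60,706 subjects), if drawn from such a
# population, roughly q^2 * N homozygotes would be expected:
print(round(expected_homozygotes * 60_706))     # ~13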
2018-04-03T03:31:31.464Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "d3bb2359d20643416b80ba07e2f2dbd45ad34f50", "oa_license": "CCBY", "oa_url": "http://www.jacionline.org/article/S0091674915013688/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d3bb2359d20643416b80ba07e2f2dbd45ad34f50", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267047760
pes2o/s2orc
v3-fos-license
The Role of the Election Supervisory Agency of West Halmahera Regency in Resolving Law Violations at the Voting and Vote Counting Stages of the 2019 Election

The problems addressed in this research are: (1) What forms of violations occurred at the voting and counting stages of the 2019 simultaneous general election in West Halmahera Regency? (2) What was the role of Bawaslu Halbar in resolving violations that occurred during the holding of the 2019 simultaneous general elections in the West Halmahera Regency area? Field research was carried out in West Halmahera Regency. This normative-empirical study uses a statute approach and a conceptual approach. The types and sources of data are primary legal data and secondary legal data. Data were collected by interviewing and by documentation and were then analyzed qualitatively. The purpose of this research is to study and analyze the forms of violations that occurred at the voting and counting stages of the 2019 simultaneous general elections in the West Halmahera Regency area, and to determine how the violations that occurred during those elections were resolved. The violations handled by Bawaslu Halbar on the basis of findings totaled 11 (eleven) election violations, of which only 3 (three) were recommended for prosecution by the Integrated Law Enforcement Center; in addition, 1 (one) finding that has been decided has permanent legal force. The other findings were not continued because the elements of an election violation were not fulfilled, based on the results of the study of the alleged violations. The forms of violations are as follows:

INTRODUCTION
Bawaslu Kab. Halbar has the authority to oversee the implementation of voting, counting, and recapitulation of votes at a total of 379 polling stations (TPS), divided as follows: 108 TPS in Kec. Jailolo, 53 TPS in Kec. Jailolo Selatan, 37 TPS in Kec. Sahu, 31 TPS in Kec. Sahu Timur, 40 TPS in Kec. Ibu Selatan, 32 TPS in Kec. Ibu Utara, 35 TPS in Kec. Ibu, and, lastly, 43 TPS in Kec. Loloda. The problems that arise in the supervision of voting, counting, and vote recapitulation are violations, irregularities, fraud, and manipulation, such as the use of voting rights by unauthorized people, money politics, voter mobilization, irregularities in voting and counting procedures, irregularities in writing the minutes, officers failing to provide the minutes and the C1 form, and writing errors. From the results of supervision at the voting stage, Bawaslu Kab. Halbar found several special incidents. In addition to these special incidents, during vote counting and recapitulation from the TPS level through the PPK and regency levels, there were changes in the numbers of votes obtained by the candidates for President and Vice President and by candidates for the DPR, the DPD, and the provincial and regency DPRD, as well as money politics and campaigning at the time of voting. Beyond the formal stages, supervision also covered non-stage matters such as the neutrality of the ASN (civil servants), money politics, and the politicization of SARA (ethnic, religious, and racial sentiment).
From the findings above, Bawaslu Kab. Halbar recommended to the West Halmahera KPU a re-count of votes at the PPK level (Jalan Baru Village, Tuada Village, and Ropu Tengah Bolu Village), and re-voting took place in Tedeng Village, Gamlamo Village, and Sasur Village. It is hoped that, given the problems that occurred in the 2019 Election, the KPU will pay more attention to this matter and will focus more on the socialization stages and on technical guidance for organizers at that level. The question is whether the supervision by Bawaslu Halbar shows that the implementation of the Election in Kab. Halbar has been running effectively, as mandated by the Law, or not. This hypothesis will be tested by the researchers at the research stage.

RESEARCH METHODS
In this study, the researchers chose the socio-legal (juridical-empirical) type of research, with a qualitative approach to describe, explain, and characterize the problems found in the field, and a quantitative approach to quantify (measure) the data obtained in order to test the hypotheses previously established in the background of the problem. To obtain data for this study, observations and interviews were conducted to obtain specific information from the informants, namely the Bawaslu of West Halmahera Regency, related agencies, and the community, on the basis of a sampling system and population, using a questionnaire instrument. The collected data were then coded using predefined categories/criteria and classified for analysis.

Handling of Findings and General Election Violation Reports
The time for handling allegations of violations is calculated in working days, from the day on which the Election Supervisor finds out about and/or finds the alleged violation. The results are reported or set forth in the supervision form and discussed in the Plenary Meeting to determine whether or not there is an alleged election violation; if there is, the election supervisors determine that the findings are to be registered or recorded in the Election register book. In this regard, the violations handled by Bawaslu Halbar on the basis of findings totaled 11 (eleven) election violations, of which only 3 (three) were recommended for prosecution by the Integrated Law Enforcement Center. In addition, 1 (one) finding that has been decided has permanent legal force. Meanwhile, the other findings were not continued because the elements of an election violation were not fulfilled, based on the results of the study of the alleged violations. What was handled by Bawaslu Halbar was inseparable from the findings of its supervision. Some of the violations handled by Bawaslu Halbar were found through the supervision of the Panwaslu at the district level (Panwascam). However, where the results of the analysis of alleged violations found by the sub-districts contained elements of election crimes, Bawaslu Halbar took over the alleged violations to be followed up in accordance with statutory provisions and then registered them at the Bawaslu Halbar level. The findings of alleged election violations handled by Bawaslu Halbar are described in the table below.

[Table: findings of alleged election violations handled by Bawaslu Halbar]
B. THE ROLE OF BAWASLU HALBAR IN THE SETTLEMENT OF VIOLATIONS IN THE IMPLEMENTATION OF THE 2019 SIMULTANEOUS GENERAL ELECTIONS IN WEST HALMAHERA REGENCY
In carrying out its duties as the supervisory agency for the implementation of elections in the regency, Bawaslu Halbar has the role of preventing and prosecuting election violations and election process disputes and of supervising the preparation of election administration. In addition, the duties of Bawaslu Halbar carried out in the context of preventing election violations and election process disputes are: identifying and mapping potential vulnerabilities to, and violations of, the election; coordinating, supervising, guiding, monitoring, and evaluating election administration; coordinating with relevant government agencies; and increasing public participation in election supervision.

As for the tasks relating to efforts to prosecute election violations, there are 3 (three) types of violations in the Election, namely:
1) Violations of the code of ethics of election organizers: violations of the ethics of election administrators based on the oaths and/or promises taken before carrying out their duties as election administrators.
2) Election administration violations: violations of the procedures and mechanisms related to the administration of the implementation of the Election at every stage, outside of election crimes and violations of the code of ethics of election administrators.
3) Election crimes: violations and/or crimes against the provisions on election criminal acts as regulated in Law No. 8 of 2012.

The working area of Bawaslu Halbar is regulated in Article 71 of Law Number 15 of 2011 concerning the Implementation of General Elections: regency Bawaslu are domiciled in the regencies/cities. The main characteristics of an independent election supervisor are:
1. Formed by means of a constitution or law;
2. Not easily intervened by certain political interests;
3. Responsible to parliament;
4. Carries out duties in accordance with the Election/Pilkada stages;
5. Has good integrity and morality; and
6.
Understands the procedures for implementing the General Election/Pilkada.
In this way, the supervisory body is not only responsible for the formation of a democratic government but also plays a part in enabling the people to choose the candidates they think are capable. Bawaslu Halbar has carried out its role as the supervisory agency for the implementation of the 2019 simultaneous elections very well. This can be seen from the actions of Bawaslu Halbar in handling violations, both in the form of findings and in the form of reports of general election violations. Bawaslu Halbar handles findings of alleged violations within the prescribed working days, counted from the day the public finds out about and/or reports the alleged violation. The results are then reported or stated in form B-1 and discussed in the Plenary Meeting on the Follow-up of Initial Information on Alleged Violations to determine whether or not there is an alleged election violation; if the election supervisor finds that a general election violation has occurred, the election supervisor determines that it is to be registered or recorded in the Election register book. Bawaslu Halbar handled 8 (eight) reported election violations, of which only 1 (one) fulfilled both the formal and the material elements, while 2 (two) election violation reports were withdrawn by the reporting parties. The other reports could not be followed up because the formal and/or material elements were not fulfilled, based on the results of the plenary meeting following up the initial information on the alleged violations. While the election stages were in progress, from the data-updating stage to the recapitulation plenary sessions at each level, 8 (eight) reports of alleged violations were received from the public, 2 (two) of which were withdrawn by the reporters. The reports that had been registered could not meet the standard of sufficient evidence and were therefore terminated, while 1 (one) other report was reviewed and recommended to the Integrated Law Enforcement Center (Gakumdu) or the related agencies because it fulfilled the elements of an election crime.

DISCUSSION
A. FORMS OF VIOLATIONS AT THE VOTING AND VOTE COUNTING STAGES IN THE IMPLEMENTATION OF THE 2019 SIMULTANEOUS GENERAL ELECTION IN WEST HALMAHERA REGENCY
Of the findings handled by Bawaslu Halbar, 1 (one) that has been decided has permanent legal force. Meanwhile, the other findings were not continued because the elements of an election violation were not fulfilled, based on the results of the study of the alleged violations. The forms of violations are as follows: (1) election crimes: there were 10 (ten) cases of criminal violations handled by Bawaslu Halbar, but the handling process could be forwarded to the Police for only 3 (three) cases, based on the results of Bawaslu Halbar's review; and (2) violations of other laws, namely the finding of an alleged violation of ASN neutrality by Muhammadun Hi. Adam on April 6, 2019, which was registered under number 03/TM/PL/KAB/32.03/IV/2019. Based on the results of Bawaslu Halbar's review, this finding of an alleged violation of ASN neutrality was recommended for sanction by the State Civil Apparatus Commission (KASN). As for election administration violations, Bawaslu Halbar received no reports or findings during the 2019 simultaneous elections. In addition, with respect to violations of the code of ethics, Bawaslu Halbar found no violations committed by election administrators in West Halmahera Regency, whether by the permanent election organizer, namely the Election Commission (KPU) of West Halmahera Regency, or by the ad hoc election organizers.
2024-01-20T16:07:58.260Z
2021-03-08T00:00:00.000
{ "year": 2021, "sha1": "9a00d113dcb9f1710ac8a5c5adcd9759e8cbc4c3", "oa_license": "CCBYSA", "oa_url": "https://ejournal.unkhair.ac.id/index.php/klj/article/download/3116/2054", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "53f01ee5004ee7e84cbc8643f60c9b11e77b2d50", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [] }
30379179
pes2o/s2orc
v3-fos-license
Drug-eluting stents

In their recent systematic review, Suzanne Ligthart and associates compared analyses of the cost-effectiveness of drug-eluting stents. [1] They found that in most studies in which an incremental cost-effectiveness ratio greater than $50 000 per quality-adjusted life-year was calculated, the study

II. Aetna considers FDA-approved drug-eluting stents medically necessary for the treatment of intra-coronary stent re-stenosis. See also CPB 0625 - Dysphagia Therapy (0625.html).

Background
Drug-eluting coronary stents (DES) are placed during a percutaneous transluminal coronary angioplasty (PTCA), a procedure to dilate (widen) narrowed arteries of the heart. A catheter with a deflated balloon at its tip is inserted into a blood vessel in the arm or groin and advanced to the narrowed part of the coronary artery. The balloon is then inflated, pressing against the plaque and/or fatty material and enlarging the inner diameter of the blood vessel so blood can flow more easily. The balloon is deflated and the catheter removed. If a stent is to be placed, a stent delivery catheter is then threaded up into the affected area and a stent is left in place. Coronary stents are expandable metal mesh tubes that push against the walls of a coronary artery to keep it open. Because of problems with restenosis following the placement of these stents, drug-eluting stents were designed. Drug-eluting stents are covered with a drug (e.g., everolimus, sirolimus, zotarolimus, paclitaxel, or ridaforolimus) that is slowly released to help prevent the build-up of the new tissue that grows in the artery, thereby preventing stenosis. Examples of US Food and Drug Administration (FDA)-approved drug-eluting coronary stents may be found on the FDA website.

The use of stents has improved the results of percutaneous coronary revascularization. However, in-stent restenosis can occur because of neointimal proliferation of connective tissue. Before the utilization of coronary stents, restenosis occurred in 32% to 55% of all angioplasties. With the placement of bare-metal stents (BMS), the rate of restenosis dropped to 17% to 41%. The advent of DES, especially the 2nd generation, and of drug-coated balloons further reduced restenosis rates to less than 10% (Buccheri et al, 2016). The macrolide anti-fungal agent sirolimus (rapamycin) has been shown to inhibit the proliferation of lymphocytes and smooth muscle cells and has been applied to the interior of balloon-expandable stents. The Bx Velocity consists of a stent coated with a mixture of synthetic polymers blended with sirolimus and is designed to release 80% of the drug within 30 days after stent implantation. Only a small amount of the drug is required, and systemic side effects from the drug are avoided. The FDA approved the stent on the basis of a review of 2 clinical studies of the safety and effectiveness of the sirolimus-eluting stent. In a multi-center, randomized, double-blind, controlled clinical trial conducted in the United States (the SIRIUS study), 1,058 patients were randomly assigned to receive either the sirolimus-eluting stent or an uncoated stainless steel stent. Patients in the SIRIUS study had blockages 15 mm to 30 mm long in arteries that were 2.5 mm to 3.5 mm wide.
Results were similar for both types of stents in the weeks immediately following the procedure, but after 9 months the patients who received the drug-eluting stent had a significantly lower rate of repeat procedures than the patients who received the uncoated stent (4.2% versus 16.8%). In addition, patients treated with the drug-eluting stent had a re-stenosis rate of 8.9%, compared with 36.3% of patients with the uncoated stent. The combined occurrence of repeat angioplasty, bypass surgery, myocardial infarction, and death was 8.8% for drug-eluting stent patients and 21% for uncoated stent patients. The types of adverse events seen with the drug-eluting stent were similar to those that occurred with the uncoated stent. The FDA's approval of the sirolimus-eluting stent was also based on the results of a non-U.S. multi-center, randomized, double-blind, controlled clinical trial (the RAVEL study) comparing sirolimus-eluting stents with standard uncoated stents in 238 adults with stable or unstable angina pectoris or silent ischemia and single coronary lesions amenable to stenting. Lesions had to be between 2.5 mm and 3.5 mm in diameter, such that they could be covered by an 18-mm stent. Patients with complex coronary lesions, such as those containing substantial calcium or thrombus, were excluded from the study. The investigators reported that use of a sirolimus-eluting stent resulted in the virtual elimination of angiographic evidence of neointimal hyperplasia and re-stenosis and greatly reduced the need for repeated revascularization procedures. At 6 months after stent placement, there was significantly less in-stent late luminal loss (a measure of neointimal proliferation) in patients receiving sirolimus-eluting stents than in patients receiving standard, uncoated stents. None of the patients receiving sirolimus-eluting stents had restenosis of 50% or more of the luminal diameter, compared with 26.6% of patients receiving standard stents. Within 1 year following stent placement, percutaneous revascularization had been performed in 22.9% of recipients of standard, uncoated stents and in none of the recipients of sirolimus-eluting stents. The investigators concluded that angina patients who received sirolimus-eluting stents had no angiographic evidence of late luminal loss or in-stent restenosis at 6 months after sirolimus-eluting stent placement and a very low rate of cardiovascular events within the year following stenting. The safety and effectiveness of the Cypher stent in smaller-diameter arteries or for longer blockages that required more than 2 stents was not studied in either trial. In addition, safety and effectiveness have not been studied in patients who are having a heart attack, patients who had previous intravascular radiation treatment, or patients whose blockage was in a bypass graft. The FDA-approved labeling of the sirolimus-eluting stent warns that patients who are allergic to sirolimus or to stainless steel should not receive a Cypher stent. Caution is also recommended for people who have had recent cardiac surgery and for women who may be pregnant or who are nursing. The FDA required the manufacturer of the sirolimus-eluting stent to conduct a 2,000-patient post-approval study and to evaluate patients from ongoing clinical trials to assess long-term safety and effectiveness and to look for rare adverse events that may result from the use of this product.
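For orientation, the SIRIUS outcome percentages quoted earlier in this passage translate into the following absolute risk reduction (ARR), relative risk (RR), and number needed to treat (NNT); this is simple arithmetic on the reported rates, not additional trial data.

def arr_rr_nnt(control, treated):
    """Absolute risk reduction, relative risk, and NNT from two event rates."""
    arr = control - treated
    return arr, treated / control, 1.0 / arr

for label, ctrl, trt in [
    ("repeat procedures", 0.168, 0.042),   # 16.8% vs 4.2% at 9 months
    ("restenosis", 0.363, 0.089),          # 36.3% vs 8.9%
    ("combined events", 0.210, 0.088),     # 21% vs 8.8%
]:
    arr, rr, nnt = arr_rr_nnt(ctrl, trt)
    print(f"{label}: ARR={arr:.1%}, RR={rr:.2f}, NNT={nnt:.0f}")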
The FDA approved the paclitaxel-eluting stent (Taxus Express Paclitaxel-Eluting Coronary Stent System, Boston Scientific Corporation) for improving luminal diameter for the treatment of de novo lesions less than 28 mm in length in native coronary arteries greater than or equal to 2.5 mm to less than or equal to 3.75 mm in diameter. Paclitaxel (Taxol) is similar to sirolimus in that it has been shown to inhibit the proliferation of connective tissue and smooth muscle. The safety and effectiveness of the Taxus Express stent in smaller-diameter arteries or for longer blockages that required more than 2 stents have not been studied. In addition, safety and effectiveness have not been studied in patients who are having a myocardial infarction, patients who had previous intravascular brachytherapy, or patients who had stenosis of a bypass graft. The FDA-approved labeling of the PES warns that patients who are allergic to paclitaxel or to stainless steel should not receive a Taxus Express stent. Caution is also recommended for people who have had recent cardiac surgery and for women who may be pregnant or who are nursing. The FDA required the manufacturer of the PES to conduct a 2,000-patient post-approval study and to evaluate patients from ongoing clinical trials to assess long-term safety and effectiveness and to look for rare adverse events that may result from the use of this product.

The 2011 ACCF/AHA/SCAI guideline for percutaneous coronary intervention states that most studies have defined a "significant" stenosis as ≥70% diameter narrowing; therefore, revascularization decisions and recommendations have been defined by ≥70% diameter narrowing (≥50% for left main CAD). Physiological criteria, such as an assessment of fractional flow reserve (FFR), have been used in deciding when revascularization is indicated. Thus, for recommendations about revascularization, a coronary stenosis with FFR ≤0.80 can also be considered to be "significant". The appropriate use criteria for coronary revascularization in patients with stable ischemic heart disease state that "although limitations of the SYNTAX score for certain revascularization recommendations are recognized and it may be impractical to apply this scoring system to all patients with multivessel disease, it is a reasonable surrogate for the extent and complexity of CAD and provides important information that can be helpful when making revascularization decisions." A SYNTAX score of greater than 22 is associated with more favorable outcomes with CABG. A "significant" coronary stenosis can include 40% to 70% luminal narrowing with an abnormal FFR, defined as less than or equal to 0.80. Although there are considerable data to support FFR-directed PCI treatment as an option, this concept is not well established for surgical revascularization.
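The numeric thresholds quoted above can be gathered into one illustrative function; this is a sketch of the quoted criteria only (the function and parameter names are ours), not a clinical decision tool.

def significant_stenosis(narrowing_pct, left_main=False, ffr=None):
    """Apply the guideline thresholds quoted above."""
    if left_main:
        return narrowing_pct >= 50
    if narrowing_pct >= 70:
        return True
    if ffr is not None and 40 <= narrowing_pct <= 70:
        return ffr <= 0.80        # abnormal FFR makes a 40-70% lesion significant
    return False

print(significant_stenosis(60, ffr=0.75))   # True
print(significant_stenosis(60, ffr=0.90))   # False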
Per UpToDate on "Revascularization in patients with stable coronary artery disease: Coronary artery bypass graft surgery versus percutaneous coronary intervention", Cutlip and colleagues (2018) prefer PCI to CABG in most patients with single-vessel non-left main CAD, defined as stenosis greater than or equal to 70%, or 50% to 70% with a fractional flow reserve (FFR) less than 0.80, in either the proximal or mid portion of the artery. The authors also state that although it is reasonable to use the SYNTAX score to help guide decision making on whether to recommend PCI versus CABG in patients with non-left main disease, no studies have shown that patients managed using this score have better outcomes than those who have not. The SYNTAX score II was better able to predict long-term mortality in patients with complex CAD than the angiographic SYNTAX score; however, while promising, further validation of this new score is needed.

Use of certain drug-eluting stents may be contraindicated in individuals with known hypersensitivity to the eluted drug or to the stent materials (e.g., sirolimus, paclitaxel, or stainless steel). Coronary artery stenting, regardless of stent type, is contraindicated for use in: ▪ Individuals who cannot receive recommended antiplatelet and/or anticoagulant therapy; or ▪ Individuals judged to have a lesion that prevents complete inflation of an angioplasty balloon or proper placement of the stent or delivery device.

In a randomized controlled trial (n = 57), Duda et al (2005) examined the safety and effectiveness of the sirolimus-eluting S.M.A.R.T. Nitinol self-expanding stent by comparison with a bare stent in superficial femoral artery (SFA) obstructions. These investigators concluded that, although there was a trend toward greater efficacy in the sirolimus-eluting stent group, there were no statistically significant differences in any of the variables.

Dzavik (2005) stated that bifurcation lesions have been recognized as one of the most important challenges facing interventional cardiologists since the start of PCI. The potential for peri-procedural occlusion of the side branch (SB) was found to be significant, resulting in early attempts at protecting the SB with a 2nd guide wire and kissing balloon inflation in order to minimize this risk and thus improve the procedural and short-term success of the procedure. The advent of stenting significantly improved the safety of the procedure, although SB success continued to be a challenge. A variety of single- as well as double-stenting techniques were developed that improved the safety and short-term results of PCI involving the SB. Long-term success, however, continued to be elusive, as a consequence of an increased need for target lesion revascularization (TLR) and higher major adverse cardiac event (MACE) rates following PCI of bifurcation lesions. The introduction of drug-eluting stents appears to have brought bifurcation PCI to a new level of long-term efficacy. Specialty bifurcation stents have been developed to provide easy access to the SB; however, these have to date had little impact on practice and have not been adopted widely. Moreover, Iakovou et al (2005) reported that the cumulative incidence of stent thrombosis 9 months after successful drug-eluting stent implantation in consecutive "real-world" patients was substantially higher than the rate reported in clinical trials. Premature discontinuation of anti-platelet therapy, renal failure, bifurcation lesions, diabetes, and low ejection fraction were identified as predictors of thrombotic events. At the 2006 European Society/World Congress of Cardiology, the results of 3 studies suggested that DES may lead to an increased risk of death and cardiac events compared with BMS. One study suggested an increase in death and Q-wave myocardial infarction (MI) in subjects receiving a sirolimus-eluting stent, while another indicated that this type of DES might increase non-cardiac mortality.
In the first study, Camenzind and associates performed a meta-analysis of randomized clinical studies comparing 1st-generation DES with BMS. The sirolimus-eluting stent trials comprised RAVEL, SIRIUS, E-SIRIUS, and C-SIRIUS and included 878 patients fitted with the novel stent and 870 who received BMS. The PES trials comprised TAXUS II, IV, V, and VI and included information on 1,685 patients fitted with this stent and 1,675 who received BMS. The pooled incidence of death and Q-wave MI combined, analyzed from within the program by time points of follow-up, was significantly higher with the sirolimus-eluting stent than with the BMS at 3 years, at 6% versus 4%, representing a 33% relative increase in risk. In the PES trials, the incidence of the combined endpoint at 3 years was 3.5% with the DES compared with 3.1% for the BMS. Pooling the latest follow-up data from each trial program revealed that the incidence of all-cause death or MI was 2.4% higher with the sirolimus-eluting stent than with the BMS (6.3% versus 3.9%) and 0.3% higher for PES than for the BMS (2.6% versus 2.3%). Further analysis indicated that the rate of total mortality and Q-wave MI combined was a significant 38% higher with the sirolimus-eluting stent versus the BMS (p = 0.03), while there was a trend toward a 16% higher incidence with the PES. These researchers warned against indiscriminate use of 1st-generation DES and said that use of BMS may still be maintained while awaiting safer 2nd-generation DES.

In the second study, Nordmann et al conducted a meta-analysis of randomized controlled trials that compared sirolimus-eluting stents and PES with BMS in their effect on total, cardiac, and non-cardiac mortality, using last-follow-up data. They found that, although there was a trend toward benefit with the DES in reducing total mortality at 1 year compared with BMS, there was a trend toward increased mortality in years 2, 3, and 4 of follow-up. Furthermore, at 2 and 3 years' follow-up, there was increased non-cardiac mortality (cancer, lung disease, and stroke) with the sirolimus-eluting stent versus the BMS (odds ratio = 2.74 and 2.04, respectively), the majority of which related to cancer. These investigators concluded that preliminary evidence suggests that sirolimus-eluting stents, but not PES, may lead to increased non-cardiac mortality. Follow-up and assessment of cause-specific deaths in patients receiving DES are mandatory to determine the safety of these devices.

A third study tracked stent thrombosis rates in more than 8,000 patients enrolled in studies in Holland and Switzerland. Wenaweser reported that over 3 years the cumulative rate of thrombosis was 2.9%, but what was disturbing was that the rate was linear, starting at 1.2% at 30 days (similar to BMS) and then adding 0.6% each year thereafter. Unlike with BMS, thrombosis did not seem to wane with time but continued to increase at the same rate, confirming concerns that DES suppress cell growth too much in some individuals, opening the door to thrombosis, which has serious consequences.
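The pooled risk differences quoted earlier in this section follow directly from the stated event rates; the check below is plain arithmetic on the reported percentages.

# Death or MI at latest follow-up, pooled (percentages quoted above)
ses, ses_bms = 6.3, 3.9          # sirolimus-eluting stent vs its BMS control
pes, pes_bms = 2.6, 2.3          # paclitaxel-eluting stent vs its BMS control
print(round(ses - ses_bms, 1))   # 2.4 percentage points, as stated
print(round(pes - pes_bms, 1))   # 0.3 percentage points, as stated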
In the light of ongoing concerns over the safety of DES, the Society for Cardiovascular Angiography and Interventions (SCAI) issued guidelines for the use of the devices (Hodgson et al, 2007). The guidelines advise physicians to ensure that patients meet published guideline criteria for percutaneous coronary intervention before implantation of any stent. The guidelines also recommend that the physician decide on an individual-patient basis whether a DES, a BMS, or surgical revascularization is most appropriate; discuss the risks and benefits with the patient; and document this in the medical record.

Direct meta-analysis of Xience V versus Taxus showed that Xience V was significantly superior to Taxus in preventing binary angiographic re-stenosis and TVR (p < 0.05 for both). Indirect comparison between Xience V and Cypher, exploiting a recent 16-trial large meta-analysis, showed that Xience V was at least as effective as Cypher in preventing TVR (p = 0.12). The authors concluded that EES (Xience V) appear to be a major breakthrough in coronary interventions, and superior efficacy has already been demonstrated in comparison with PES (Taxus). Data available to date also suggest that Xience V is at least as effective as SES (Cypher). Whether long-term results and direct comparison with Cypher will also be favorable remains to be established by future clinical trials.

There were no significant differences between the ZoMaxx- and Taxus-treated groups with respect to TVR (8.0% versus 4.1%; p = 0.14), major adverse cardiac events (12.6% versus 9.6%; p = 0.43), or stent thrombosis (0.5% in both groups). The authors concluded that, after 9 months, ZES showed less neointimal inhibition than PES, as shown by higher in-stent late loss and re-stenosis on quantitative coronary angiography. Although there were no differences in cumulative mortality, re-infarction, or stent thrombosis, the incidence of very late re-infarction and stent thrombosis was increased with these DES.

In the BioNIR and Resolute Integrity groups the corresponding rates were 4.3% versus 5.9%, respectively (p = 0.45). The authors concluded that clinical outcomes at 1 year were comparable between the BioNIR and Resolute Integrity stents and that the BioNIR stent was non-inferior to the Resolute for the primary end-point of angiographic in-stent LLL at 6 months.

Aorto-Arteritis (Takayasu Arteritis)
In a recent review of advances in the medical and surgical treatment of aorto-arteritis (also known as Takayasu arteritis), an inflammatory vascular disorder that produces arterial stenoses and aneurysms primarily involving the thoraco-abdominal aorta and its branches and the pulmonary arteries, Liang and Hoffman (2005) stated that new drugs that target intimal hyperplasia, as well as drug-eluting stents, deserve to be studied for possible utility as adjuncts to present treatments.

UpToDate reviews on "Management of benign esophageal strictures" (Guelrud, 2017) and "Use of expandable stents in the esophagus" (Baron and Law, 2017) do not mention the use of drug-eluting stents in the treatment of esophageal strictures (malignant or benign).

Stenotic Lesions of Non-Coronary Arteries
Shammas and Dippel (2005) noted that follow-up data extend to approximately 1 year, and it remains unclear whether DES will have a significant impact on outcomes for patients who undergo vertebral artery intervention.

Venous Stenosis Associated with Dialysis Vascular Access
Athappan and Ponniah (2009)

Two-year crude mortality risk differences (drug-eluting minus bare-metal stents) were determined from vital statistics records, and risk-adjusted mortality, MI, and revascularization differences were estimated by using propensity-score matching of patients with severely reduced GFR, based on clinical and procedural information collected at the index admission.
Bioresorbable Stents
Bioresorbable stents, or scaffolds, refer to stents made from fully biodegradable polymers, that is, materials that undergo complete breakdown and removal over time (Cutlip and Abbott, 2017). An UpToDate review on "Coronary artery stent types in

In an editorial that accompanied the afore-mentioned study by Zhang et al, Martin and Hasan (2016) stated that "although this is a major stride for BVS and the Absorb stent, the device is still in the early stage, and long-term post-marketing surveillance will be needed to ensure both safety and efficacy in broader populations."

Drug-Eluting and Non-Drug-Eluting Stents in Lower Extremity Peripheral Arterial Disease
Kibrik and colleagues (2019) compared drug-eluting stents (DES) and non-drug-eluting stents (nDES) in lower extremity peripheral arterial disease. There were 21 lesions that were treated with both nDES and DES, and these were excluded from further analysis. The average patient age was 73.2 ± 11.6 years; 68.6% of patients had hypertension, and 58.1% had diabetes. There were no statistical differences between the nDES and DES groups with respect to gender, age, laterality, diabetes mellitus, coronary artery disease, gangrene, ulcers, hyperlipidemia, atrial fibrillation, deep vein thrombosis, claudication, critical limb-threatening ischemia, ipsilateral bypass, re-stenosis, thrombosis, limb loss, or ipsilateral amputation. Bivariate analysis showed a higher incidence of hypertension among nDES patients (p = 0.001). There was no statistical difference between TransAtlantic Inter-Society Consensus (TASC) II classes and the type of stent used (p = 0.95). The authors concluded that, in this retrospective analysis from one institution, the use of an nDES or a DES did not result in a statistically significant difference in the rates of thrombosis, re-stenosis, ipsilateral reintervention, or ipsilateral amputation over a 2-year period for lesions involving the CFA, SFA, and above-knee popliteal artery.

CPT Codes / HCPCS Codes / ICD-10 Codes
Information in the [brackets] below has been added for clarification purposes. Codes requiring a 7th character are represented by "+".
2017-09-07T06:45:21.738Z
2007-05-22T00:00:00.000
{ "year": 2007, "sha1": "2108628dcdcfe2a45483540408a6fe090cecd5ea", "oa_license": null, "oa_url": "http://www.cmaj.ca/content/176/11/1611.1.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "13ba4d0fc63f365bc8eaf84e17f0a1bec96a77b9", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
268819133
pes2o/s2orc
v3-fos-license
Number of steady states of quantum evolutions

We prove sharp universal upper bounds on the number of linearly independent steady and asymptotic states of discrete- and continuous-time Markovian evolutions of open quantum systems. We show that the bounds depend only on the dimension of the system and not on the details of the dynamics. A comparison with similar bounds deriving from a recent spectral conjecture for Markovian evolutions is also provided.

Introduction
Spectral theory is still a hot topic in quantum mechanics. Indeed, quantum theory was developed at the beginning of the last century in order to explain the energy spectra of atoms [1]. In particular, the dynamics of a closed quantum system, namely one isolated from its surroundings, is encoded in the eigenvalues (energy levels) of its Hamiltonian [2]. Similarly, for an open quantum system under the Markovian approximation [3], studying the spectrum of the Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) generator (the open-system analogue of the Hamiltonian) allows us to obtain information about the dynamics of the system [4].

In spite of this general interplay between spectrum and dynamics, a complete understanding of open-quantum-system evolutions still remains a formidable task. However, a more detailed analysis may be performed if we restrict our attention to the large-time dynamics of open systems. This amounts to studying the steady and, more generally, the asymptotic states towards which the evolution converges in the asymptotic limit.

Besides their theoretical importance, stationary states also play a central role in reservoir engineering [18-20], which consists of properly choosing the system-environment coupling in order to prepare a target quantum state, and in phase-locking and synchronization of quantum systems [21]. Moreover, GKLS generators with multiple steady states [22] may be used in order to drive a dissipative system into (degenerate) subspaces protected from noise [23] or decoherence [24], in which only a unitary evolution, related to purely imaginary eigenvalues of the generator [11], may be exploited for the realization of quantum gates [25-27]. For this reason, the analysis of stationary states and, more generally, the study of the relaxation of an open quantum system towards equilibrium is needed for applications in quantum information storage and processing [28-31].

The asymptotic properties of open quantum systems have also been deeply studied in quantum statistical mechanics. In particular, dissipative quantum phase transitions [32,33], as well as driven-dissipative systems [34,35], require the study of the large-time dynamical behaviour of the system. More generally, determining the steady states of an open system sheds light on the transport properties of the system itself. In particular, discontinuities in the dimension of the steady-state manifold should correspond to jumps in the transport features of the system [36,37].

Finally, open-quantum-system asymptotics naturally emerges in quantum implementations of Hopfield-type attractor neural networks [38]. Indeed, the stored memories of such a network may be identified with the stationary states of its (non-unitary) evolution [39,40].
Despite much effort devoted to the asymptotic dynamics of open quantum systems, general constraints on the number of steady and asymptotic states of quantum evolutions are still to be found, as far as we know. Besides the theoretical relevance of this problem, such constraints may allow us to elucidate the potential of some of the above-mentioned applications.

In this Article, we find sharp universal upper bounds on the number of linearly independent steady and asymptotic states of discrete-time and Markovian continuous-time quantum evolutions. Importantly, these bounds are related only to the dimension of the system and not to the properties of the dynamics.

The Article is organized as follows. After introducing some preliminary notions in Section 2, we discuss our main results in Section 3, and then provide explicit examples proving the sharpness of the bounds in Section 4. Subsequently, before proving the theorems in Section 6, our results are compared in Section 5 with analogous bounds derived from a recent universal spectral conjecture proposed in [41]. Finally, we draw the conclusions of the work in Section 7.

Preliminaries
In the present Section we recall some basic notions about evolutions of finite-dimensional open quantum systems; see also Section 6 for more details.

The state of an arbitrary d-level open quantum system is given by a density operator ρ, namely a positive semidefinite operator on a Hilbert space H (d = dim H) with Tr ρ = 1, whereas its dynamics in a given time interval [0, τ] with τ > 0 is described by a quantum channel Φ, namely a completely positive trace-preserving map (a superoperator) on B(H), the space of linear operators on H [42]. If the system state at time t = 0 is ρ, its discrete-time evolution at time t = nτ, with n ∈ N, is given by the action of the n-fold composition Φ^n of the map Φ, namely

ρ_n = Φ^n(ρ) = (Φ ∘ ⋯ ∘ Φ)(ρ).    (1)

As the Hilbert space H is finite-dimensional, B(H) is isomorphic to the space of complex matrices of order d. We indicate the space of d × d′ matrices with complex entries by M_{d,d′}(C) and, for the sake of simplicity, we set M_d(C) ≡ M_{d,d}(C).

Let µ_α, α = 0, …, n − 1, with n ⩽ d², be the distinct eigenvalues of Φ, namely

Φ(A_α) = µ_α A_α,

with A_α being an eigenoperator corresponding to µ_α. The spectrum spect(Φ) is the set of eigenvalues of Φ. Let ℓ_α be the algebraic multiplicity [43] of the eigenvalue µ_α, so that Σ_{α=0}^{n−1} ℓ_α = d². It is well known [12] that: i) the spectrum is contained in the unit disk D = {z ∈ C : |z| ⩽ 1}; ii) 1 is always an eigenvalue, namely 1 ∈ spect(Φ); iii) the spectrum is symmetric with respect to the real axis, i.e. µ ∈ spect(Φ) implies that its complex conjugate also belongs to spect(Φ); iv) the unimodular or peripheral eigenvalues µ_α ∈ ∂D, the boundary of D, are semisimple, i.e. their algebraic multiplicity ℓ_α coincides with their geometric multiplicity. The eigenspace Fix(Φ) corresponding to µ_0 = 1, called the fixed-point space of Φ, is spanned by a set of ℓ_0 density operators, which are the steady (or stationary) states of the channel Φ. Also, the space Attr(Φ) corresponding to the peripheral eigenvalues µ_α ∈ ∂D is known as the asymptotic [44] or the attractor subspace [45,46] of the channel Φ, since the evolution Φ^n(ρ) of any initial state ρ asymptotically moves towards this space for large times, i.e. as n → ∞; see Section 6 for more details. These limiting states may be called oscillating or asymptotic states, and it is always possible to construct a basis of such states for the subspace Attr(Φ), analogously to Fix(Φ).
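Properties i)-iv) are easy to inspect numerically by representing a channel as a d² × d² matrix acting on vectorized density operators, using the Kraus form recalled later in Eq. (40). The following NumPy sketch is ours, with a qubit amplitude-damping channel chosen purely as an arbitrary test case.

import numpy as np

def channel_matrix(kraus):
    """Matrix of ρ ↦ Σ_k B_k ρ B_k† acting on the vectorization of ρ."""
    return sum(np.kron(B.conj(), B) for B in kraus)

p = 0.3  # damping probability of the illustrative qubit channel
B1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])
B2 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])

ev = np.linalg.eigvals(channel_matrix([B1, B2]))
print(np.sort_complex(ev))
print(np.all(np.abs(ev) <= 1 + 1e-12))          # i) spectrum inside the unit disk
print(np.any(np.isclose(ev, 1)))                # ii) 1 is always an eigenvalue
print(np.allclose(np.sort_complex(ev), np.sort_complex(ev.conj())))  # iii) symmetry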
Note that closed-system evolutions are described by a unitary channel,

Φ(ρ) = U ρ U†,    (6)

for some unitary U. Importantly, a quantum channel is unitary if and only if spect(Φ) ⊆ ∂D, i.e. all its eigenvalues belong to the unit circle [12]. The Markovian continuous-time evolution of an open quantum system is described by a quantum dynamical semigroup [8],

ρ_t = Φ_t(ρ) = e^{tL}(ρ), t ⩾ 0,    (7)

where the generator L takes the well-known GKLS form [47,48]

L = L_H + L_D, L_H(ρ) = −i[H, ρ], L_D(ρ) = Σ_k ( A_k ρ A_k† − ½ {A_k† A_k, ρ} ),    (8)

where the square (curly) brackets represent the (anti)commutator, H = H† is the system Hamiltonian, the noise operators A_k are arbitrary, and the first and second terms L_H and L_D in Eq. (8) are called the Hamiltonian and dissipative parts of the generator, respectively. Notice that the GKLS form (8) is not unique and, in particular, neither is the decomposition of L into Hamiltonian and dissipative contributions. L is called a Hamiltonian generator if L_D = 0 for one (and hence all) GKLS representations (8).

If λ_α, α = 0, …, m − 1, with m ⩽ d², denote the distinct eigenvalues of L, from the GKLS form one obtains that λ_0 = 0 and, given an eigenoperator X_0 ⩾ 0 corresponding to this eigenvalue, that X_0 / Tr(X_0) is a steady state of Φ_t = e^{tL} [49]. The kernel of L, i.e. the eigenspace corresponding to the zero eigenvalue, will be denoted by Ker(L). Moreover, Re λ_α = −Γ_α ⩽ 0, with Γ_α ⩾ 0 being the relaxation rates of L. These parameters, which describe the relaxation properties of an open system [50], may be experimentally measured. A condition on the relaxation rates of a quantum dynamical semigroup, recently conjectured in [41], which we will call the Chruściński-Kimura-Kossakowski-Shishido (CKKS) bound, is recalled in Section 5 in order to investigate its relation with the main results of this work, stated in Section 3. Finally, note that the purely imaginary (peripheral) eigenvalues of L are semisimple and are related to the large-time dynamics of Φ_t = e^{tL}, as the space corresponding to such eigenvalues is the asymptotic manifold Attr(L) of the Markovian evolution; see Section 6 for details. Importantly, as for unitary channels, the generator L is Hamiltonian if and only if Γ_α = 0 for all α = 0, …, m − 1, i.e. all its eigenvalues are peripheral.

Bounds on the dimensions of the asymptotic manifolds
In this Section we present the main results of this work, whose proofs are postponed to Section 6. First, let us introduce the quantities involved in our findings. Recall that we denoted by µ_α the α-th distinct eigenvalue of Φ and by ℓ_α its algebraic multiplicity, with α = 0, …, n − 1. In particular, ℓ_0 is the algebraic multiplicity of µ_0 = 1, and it coincides with the dimension of its eigenspace, the steady-state manifold, i.e.

ℓ_0 = dim Fix(Φ).

We define the peripheral multiplicity ℓ_P of Φ as the sum of the multiplicities of all peripheral eigenvalues, which coincides with the dimension of the attractor subspace Attr(Φ), made up of asymptotic states. Namely,

ℓ_P = Σ_{µ_α ∈ ∂D} ℓ_α = dim Attr(Φ).

Physically, ℓ_0 and ℓ_P are respectively the numbers of independent steady and asymptotic states of the evolution described by Φ.
Analogously, denote by m_α, α = 0, …, m − 1, the algebraic multiplicity of the α-th distinct eigenvalue λ_α of the generator L of the continuous-time semigroup (7). In particular, m_0 denotes the multiplicity of the zero eigenvalue λ_0 = 0, so that

m_0 = dim Ker(L).

Moreover, the peripheral multiplicity m_P of L is the sum of the multiplicities of its purely imaginary eigenvalues and measures the dimension of its attractor manifold:

m_P = Σ_{Re λ_α = 0} m_α = dim Attr(L).

The integers m_0 and m_P represent respectively the numbers of independent steady and asymptotic states of the Markovian evolution Φ_t = e^{tL} generated by L. Now we provide sharp upper bounds on these multiplicities. Let us call a quantum channel non-trivial if it is different from the identity channel Φ(ρ) = ρ.

Theorem 1 (Unitary discrete-time evolution). Let Φ be a non-trivial unitary quantum channel on a d-dimensional system. Then the multiplicity ℓ_0 of the eigenvalue 1 and the peripheral multiplicity ℓ_P of Φ satisfy

ℓ_0 ⩽ d² − 2d + 2, ℓ_P = d².    (14)

Theorem 2 (Non-unitary discrete-time evolution). Let Φ be a non-unitary quantum channel. Then the multiplicity ℓ_0 of the eigenvalue 1 and the peripheral multiplicity ℓ_P of Φ obey

ℓ_0 ⩽ ℓ_P ⩽ d² − 2d + 2.    (15)

The content of the latter result is schematically illustrated in Fig. 1. It is possible to construct quantum channels with ℓ_0 and ℓ_P attaining the equalities in Eqs. (14) and (15); namely, all the upper bounds are sharp (see Section 4 for explicit examples). Obviously, for a trivial quantum channel ℓ_0 = ℓ_P = d², so the bounds (14) and (15) do not apply to it.

The above results, valid for discrete-time evolutions (1), are perfectly mirrored by the following results on Markovian continuous-time evolutions (7) with GKLS generators (8).

Theorem 3 (Hamiltonian generator). Let L be a non-zero Hamiltonian GKLS generator. Then the multiplicity m_0 of the zero eigenvalue and the peripheral multiplicity m_P of L fulfill

m_0 ⩽ d² − 2d + 2, m_P = d².    (16)

Theorem 4 (Non-Hamiltonian GKLS generator). Let L be a non-Hamiltonian GKLS generator. Then the multiplicity m_0 of the zero eigenvalue and the peripheral multiplicity m_P of L satisfy

m_0 ⩽ m_P ⩽ d² − 2d + 2.    (17)

The bounds (16) and (17) are also sharp, like the previous ones; see Section 4. Clearly, the two latter theorems do not apply to the zero operator, because in that case m_0 = m_P = d².

Theorem 1 provides a tight universal upper bound on the number of linearly independent steady states of a (non-trivial) unitary quantum channel Φ, depending only on the dimension d of the system. Similarly, Theorem 2 shows that the number of linearly independent steady and asymptotic states of a non-unitary channel Φ is bounded from above by the same d-dependent quantity. Theorems 3 and 4 provide analogous constraints for non-zero Hamiltonian and non-Hamiltonian generators respectively; indeed, Theorem 4 easily follows from Theorem 2, as shown in Section 6.

Interestingly, Theorem 4 implies that when we add to a Hamiltonian generator a dissipative part, no matter how small, the peripheral multiplicity m_P undergoes a jump at least as large as the gap

∆ = d² − (d² − 2d + 2) = 2(d − 1),    (18)

which grows linearly with d. Consequently, the values of m_P strictly between d² − 2d + 2 and d² are forbidden. The same jump in the peripheral multiplicity ℓ_P occurs when we pass from unitary channels to non-unitary ones, according to the bound (15).

Sharpness of the bounds
In this Section we prove the sharpness of the bounds stated in Theorems 1-4. Let us start with the proof of the sharpness of the bound (16) for non-zero Hamiltonian GKLS generators. If we take

H = Σ_{i=1}^{d} h_i |e_i⟩⟨e_i|, with h_1 = h_2 = ⋯ = h_{d−1} ≠ h_d, and L = L_H = −i[H, ·],

for some basis {|e_i⟩}_{i=1}^{d} of H, then it is immediate to check that Ker(L) = span{|e_i⟩⟨e_j| : h_i = h_j}, whence m_0 = (d − 1)² + 1 = d² − 2d + 2.
Furthermore, if we require condition (22) on the differences of the eigenvalues of H, the multiplicity ℓ_0 of the corresponding unitary channel Φ = e^L attains the inequality in Eq. (14). Note that condition (22) guarantees that Φ is not trivial.

Let us now turn our attention to the sharpness of the bounds (17) for GKLS generators. Recall that the commutant S′ of a system of operators S = {A_k}_{k=1}^{M} ⊂ B(H) is defined as

S′ = {X ∈ B(H) : [X, A_k] = 0 for all k = 1, …, M}.

Now consider the system S = {A_k}_{k=1}^{N} of operators that are diagonal with respect to the basis {|e_i⟩}_{i=1}^{d},

A_k = λ_1^{(k)} P_1 + λ_2^{(k)} P_2.

Here λ_1^{(k)}, λ_2^{(k)} are the eigenvalues of A_k, and P_1 = |e_1⟩⟨e_1|, P_2 = I − P_1 are the corresponding spectral projections, with I being the identity operator on H. Note that, by construction, the eigenvalues λ_1^{(k)}, λ_2^{(k)} have respectively multiplicities m_1 = 1 and m_2 = d − 1 for all k = 1, …, N. Let us now take into account the generator

L(ρ) = Σ_{k=1}^{N} ( A_k ρ A_k − ½ {A_k², ρ} ),    (26)

for which L(I) = 0. We have

m_0 = dim Ker(L) = dim S′ = m_1² + m_2² = d² − 2d + 2.

Here, the second and fourth equalities follow respectively from Proposition 8 and Corollary 5.1 in Section 6, whereas the third one is a consequence of Eq. (24). Moreover, as L is non-Hamiltonian by construction, we necessarily have m_P = m_0 = d² − 2d + 2 by Theorem 4. A quantum channel saturating the equalities in Eq. (15) is simply Φ = e^L, with L given by Eq. (26).

In particular, we can construct a more physically transparent example of a GKLS generator saturating the bounds (17) by taking S = {P_1, P_2}. With respect to the basis {|e_i⟩}_{i=1}^{d}, the associated Markovian channel Φ_t = e^{tL} leaves the populations and the matrix elements x_{ij} with i, j ⩾ 2 unchanged, while multiplying the coherences x_{12}, …, x_{1d} ∈ X_{12} (and their conjugates) by the decaying factor e^{−t}. Therefore we realize that Φ is a phase-damping channel causing an exponential suppression of the coherences x_{12}, …, x_{1d}, and we immediately see that it attains the equalities in the bounds (15), in line with the discussion above.

Relation with the Chruściński-Kimura-Kossakowski-Shishido bound
In this Section we make a comparison between the bounds given in Theorems 1-4 and similar bounds arising from a recent spectral conjecture discussed in [41]. As already noted in Section 1, the real parts of the eigenvalues λ_α of a quantum dynamical semigroup Φ_t = e^{tL} with GKLS generator L are non-positive. However, it was recently conjectured in [41] that the relaxation rates Γ_α = −Re(λ_α) are not arbitrary non-negative numbers, but must obey the CKKS bound

Γ_α ⩽ (1/d) Σ_{β=0}^{m−1} m_β Γ_β,    (30)

where m_β is the algebraic multiplicity of λ_β. This upper bound has not yet been proved in general, but it holds for qubit systems, while for d ⩾ 3 it is valid for generators of unital semigroups, i.e. those with L(I) = 0, and for a class of generators obtained in the weak coupling limit [41] (see also [51] for further results). It has also been experimentally demonstrated for two-level systems [52,53].

The CKKS bound (30) implies the following inequalities for non-Hamiltonian generators:

m_0 ⩽ m_P ⩽ d² − d.    (31)

Indeed, summing Eq. (30) over the bulk, i.e. non-peripheral, eigenvalues of L yields

Σ_{Γ_α > 0} m_α Γ_α ⩽ (m_B / d) Σ_β m_β Γ_β,    (32)

where m_B = Σ_{Γ_α > 0} m_α is the number of the repeated eigenvalues in the bulk. If L is not Hamiltonian, viz. m_B ≠ 0 as noted in Section 2, this implies that

m_B ⩾ d,    (33)

namely the assertion. Interestingly, the CKKS bound (30) also implies a bound, Eq. (34), on the real parts x_α of the eigenvalues µ_α of an arbitrary quantum channel Φ [41], expressed through a sum over β = 0, …, n − 1 weighted by the algebraic multiplicities ℓ_β of the µ_β. Although Eq. (34) does not yield an upper bound similar to Eq. (31) for the peripheral multiplicity ℓ_P of Φ, the multiplicity ℓ_0 of the eigenvalue µ_0 = 1, i.e. the number of steady states of Φ, satisfies

ℓ_0 ⩽ d² − d    (35)

if Φ is not trivial. The proof goes as follows: when ℓ_0 = d² we have the identity channel, so suppose ℓ_0 < d². Then Eq. (34) yields Eq. (36), whose right-hand side is the arithmetic mean of the set S introduced in Eq. (37); condition (36) is therefore equivalent to requiring that all the elements of S exceed their arithmetic mean, which is true if and only if N = 0, and this concludes the proof of Eq. (35).
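The saturation of the bounds (17) by the phase-damping generator built from S = {P_1, P_2} can be checked numerically. The NumPy sketch below is ours; it assembles the matrix of the generator (26) through the vectorization identity vec(AρB) = (B^T ⊗ A) vec(ρ) and counts the peripheral eigenvalues.

import numpy as np

d = 3
P1 = np.zeros((d, d)); P1[0, 0] = 1.0
A_ops = [P1, np.eye(d) - P1]                 # the system S = {P1, P2}

I = np.eye(d)
L = np.zeros((d * d, d * d), dtype=complex)
for A in A_ops:                              # L(ρ) = Σ_k ( A_k ρ A_k − ½{A_k², ρ} )
    AdA = A.conj().T @ A
    L += np.kron(A.conj(), A)                # A ρ A term
    L -= 0.5 * np.kron(I, AdA)               # −½ A² ρ term
    L -= 0.5 * np.kron(AdA.T, I)             # −½ ρ A² term

ev = np.linalg.eigvals(L)
m_P = int(np.sum(np.abs(ev.real) < 1e-9))    # peripheral = purely imaginary eigenvalues
print(m_P, d**2 - 2*d + 2)                   # both equal 5 for d = 3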
( 36) is the arithmetic mean of the set therefore condition (36) is equivalent to require that all the elements of S exceed their arithmetic mean, which is true if and only if N = 0 and which concludes the proof of Eq. ( 35). Furthermore, from Eq. ( 31) it follows that for non-unitary Markovian channels, viz. of the form Φ = e L with L non-Hamiltonian generator.Now let us compare the bounds (31), (35), and (39) arising from the CKKS conjecture (30) discussed in the present Section with the ones stated in Section 3. First, the upper bound (15) for ℓ P is also valid for non-Markovian channels, differently from the bound (39) and, in the Markovian case, it is stricter than Eq. ( 39) when d ̸ = 2. Analogously, the bound in Eq. ( 14) and the one for ℓ 0 in Eq. ( 15) boil down to Eq. ( 35) in the two-dimensional case, but they are stricter otherwise. Similarly, the bounds (31) for m 0 and m P are not tight for all d ⩾ 3, whereas they are equivalent to condition (17) in the case d = 2. Consequently, the jump for m P is also predicted by the bound (31) but ∆ = d, which is loose for all d ̸ = 2.In conclusion, the bounds given in Theorems 1-4 imply the bounds (31), (35), and (39) deriving from the CKKS conjecture (30), in favor of the validity of the conjecture itself. Proofs of Theorems 1-4 In this Section we will prove Theorems 1-4 stated in Section 3. To this purpose, let us recall several preliminary concepts, besides the ones introduced in Section 2. First, given a quantum channel Φ, it always admits a Kraus representation [42], in terms of some operators {B k } N k=1 ⊂ B(H).Note that the second equation in (40) expresses the trace-preservation condition. Note that Φ * has the same eigenvalues with the same algebraic multiplicities of Φ [54].In addition, the spectral projections P and P P onto Fix(Φ) and Attr(Φ) are quantum channels themselves [12].Finally, let M 0 ≡ Ker(L) be the kernel of L, given by Before discussing the proofs of Theorems 1-4, we need a few preparatory results.Consider A ∈ B(H) with spectrum spect(A) = {λ k } N k=1 .If m k , n k are the algebraic and geometric multiplicities of the eigenvalue λ k , let d j,k with j = 1, . . ., n k and k = 1, . . ., N indicate the dimension of the j-th Jordan block corresponding to the eigenvalue λ k of the Jordan normal form J of A [43]. where with i = 1, . . ., m k , k = 1, . . ., N and |I| being the cardinality of the set I. In particular, the equality holds if and only if A is diagonalizable with spectrum spect(A) = {λ 1 , λ 2 } having algebraic multiplicities m 1 = 1 and m 2 = d − 1. Proof. By definition {s where the equality holds if and only if A is diagonalizable, viz.m k = n k for all k = 1, . . ., N .Now, by the fundamental theorem of algebra [43], where the equality holds if and only if A = cI , c ∈ C. If A is a non-scalar matrix, then the maximum value is attained when A is diagonalizable and has spectrum spect(A) = {λ 1 , λ 2 } with multiplicities m 1 = 1 and m 2 = d − 1, and it reads which concludes the proof. Let us now recall several known facts about open-system asymptotics.Let us start with the following definition. If Φ is not faithful, then we can define the faithful channel φ 00 as in Eq. ( 54), therefore we have as a consequence of Proposition 9 where d 0 = dim(H 0 ) ⩽ d − 1 and B 0 = {B 0,k , B † 0,k } N 0 k=1 is the system of Kraus operators of φ 00 .Let us now prove the analogous bound on the peripheral multiplicity ℓ P of Φ. 
Observe that the spectral projection P P of Φ onto Attr(Φ) satisfies rank P P = dim Attr(Φ) = ℓ P < d 2 , because not all the eigenvalues of the non-unitary channel Φ are peripheral. Indeed, P P is a non-unitary channel, as P P is non-invertible. Therefore, since the fixed-point space of P P is Attr(Φ), it is sufficient to apply the bound (58) to P P . □

Proof Theorem 3: Let L be a non-zero Hamiltonian GKLS generator. Then it is straightforward to show that [27] its eigenvalues are

λ j,k = −i(h j − h k ), j, k = 1, . . ., d, (60)

where h k , with k = 1, . . ., d, are the (repeated) real eigenvalues of the Hamiltonian H. Therefore this implies that the maximum value of the algebraic multiplicity m 0 of the zero eigenvalue of L is d 2 − 2d + 2, obtained by setting h 1 = h 2 = · · · = h d−1 ̸ = h d . The equality m P = d 2 follows immediately from Eq. (60). □

Proof Theorem 4: Let L be a non-Hamiltonian GKLS generator. The first inequality is trivial. Since m P = dim Attr(Φ) = dim Attr(L), where Attr(Φ) is the attractor subspace of the non-unitary channel Φ = e L , the second inequality follows from Theorem 2. □

Notice that the universal bounds given in Theorems 1-4 may also be proved by using the structure theorems on the asymptotic evolution of quantum channels [12,14].

Conclusions and outlooks

We found dimension-dependent sharp upper bounds on the number of independent steady states of non-trivial unitary quantum channels and an analogous bound on the number of independent asymptotic states of non-unitary channels. Moreover, similar sharp upper bounds on the number of independent steady and asymptotic states of GKLS generators were also obtained. We further made a comparison of our bounds with similar ones obtained from the CKKS conjecture (30) and (34).

Interestingly, the upper bound on the peripheral multiplicity of GKLS generators reveals that adding a dissipative perturbation to an initially Hamiltonian generator causes a jump for the peripheral multiplicity across a gap linearly depending on the dimension, and an analogous remark may be made for the peripheral multiplicity ℓ P of quantum channels on the basis of condition (15). These findings may be framed in a series of works addressing the general spectral properties of open quantum systems [12,41,51,56-59], in particular Markovian ones, and can motivate further study of the spectral properties of channels and generators, which are far from being completely understood. In particular, the bounds found in this Article may be the consequence of a generalization of the CKKS bound (30) involving also the imaginary parts of the eigenvalues of a GKLS generator. Moreover, structure theorems on the asymptotic dynamics [12,14] may be employed in order to find further constraints for the quantities studied in this work.

Figure 1: Schematic representation of the content of Theorem 2. A system S coupled to a bath B evolves according to the non-unitary discrete-time evolution Φ n with n ⩾ 1. The asymptotic states ρ 1 , . . ., ρ ℓ P of S, spanning the attractor subspace Attr(Φ), are at most d 2 − 2d + 2, where d is the dimension of the system.
Effect of polycarboxylate ether-based superplasticizer dosage on fresh and hardened properties of cement concrete

Ready-mix concrete nowadays is available almost everywhere in the country. RMC plants indispensably use high-range water-reducing admixtures, namely sulfonated naphthalene formaldehyde condensate and polyether polycarboxylate, for making standard cement concrete. These admixtures are used to economize the cost of concrete by reducing the quantity of water. The commonly used high-range water-reducer admixture, the polycarboxylate ether-based (PCE) superplasticizer, has been used to investigate its influence on the workability and strength of concrete by measuring slump and ultrasonic pulse velocity as well as crushing strength. Cement of grade 43 has been used in this work. The superplasticizer was added to the mix at 0.15%, 0.25%, 0.35%, 0.45%, and 0.55% by weight of cement to make concrete of grade M60, without altering the w/c ratio. Experimental results reveal that the dosage limit of 1% for such admixtures given in the Indian standard code of practice for plain and reinforced concrete, IS456:2000, is on the higher side. The slump, ultrasonic pulse velocity, and crushing strength of M60 grade standard concrete with a water to binder ratio of 0.40 give an optimum dose of 0.45% by weight of cement, which is quite less than the restricted quantity of dosage.

Introduction

High-range water-reducing admixtures are commonly utilized in concrete where pumpable, high-strength concrete is required [1]. In self-consolidating concrete also, high-range water-reducing superplasticizers are employed to produce the required flowability and inherent compactability [2]. Mini slump and Marsh funnel tests with varying percentages of PCE SP dose were conducted to find out the saturation dosage, which is the dosage at the maximum spread diameter beyond which there is no increase, on PPC and OPC+25% fly ash. Saturation dosages of PCE SP were found to be 0.4 and 0.6 in the Mini slump and Marsh funnel tests respectively. The value of the saturation dosage found in the Mini slump and Marsh funnel tests was reported as 0.6 for OPC+25% fly ash [3]. In another experimental work, the influence of SP on the plastic shrinkage of conventional and silica fume cement concrete with 7.5% of Type 1, Type 2, and Type 3 silica fume, having PCE SP dosages of 0.4, 0.7, and 0.66% respectively, was investigated. It was reported that the PCE SP showed the highest compatibility, with respect to its massive retardation, with Type 1 and Type 3 silica fume cement concrete [4]. The incorporation of superplasticizer in fine recycled concrete aggregate was studied for mechanical properties. In that study, two types of superplasticizers, SP1, containing lignosulfonate, and SP2, whose chemical composition is polycarboxylate, were added at 1% by cement weight. Fine aggregate was substituted with fine recycled aggregate at 20, 40, 60, 80, and 100% by weight. It was reported that the higher water-reducing power of the additive gave a worse relative performance of the fine recycled aggregate (FRA) in fine recycled concrete aggregate [5]. Polycarboxylate ether-based superplasticizer with 0.3, 0.5, 0.7, and 1% by weight of cement was used to investigate its influence on the hydration, microstructure, and mechanical response of cement paste. It was reported that 1% of PCE-based SP in the cement paste produced a higher percentage of silicate at 2 days.
Based on the microstructural analysis, it was inferred that the PCE-based superplasticizer slightly decreases the porosity of cement paste [6].

Materials

Wonder brand OPC cement of grade 43, from a fresh lot complying with [7], was purchased from the authorized supplier. This OPC cement was examined for its physical characteristics conforming to BIS [8-12]. The test findings of the cement are given in Table 1. The fine and coarse aggregates, natural river sand of zone II as per [13] and crushed stone of sizes 20mm and 10mm respectively, were tested for their physical properties, shown in Table 2, following [14,15]. Natural river bed coarse sand was used as fine aggregate in this study. The other tested characteristics of the coarse aggregate, shown in Table 2, comply with [14,15]. The polycarboxylate ether-based superplasticizer (HRWR) used [16] was procured from the authorized supplier (Kunal Chemical Company). The characteristics of the superplasticizer used are presented in Table 3.

Concrete mix and its ingredients

Concrete of grade M60 with a desired workability of 100mm was mix designed conforming to [17,18]. Cube specimens of standard size 100mm x 100mm x 100mm were cast using the polycarboxylate ether-based (PCE) superplasticizer, abbreviated as PCESP, at 0.15%, 0.25%, 0.35%, 0.45%, and 0.55% by weight of cement. In the present work, the concrete of grade M60 was prepared in the laboratory with varying percentages of superplasticizer to investigate its effect on the fresh and hardened state of the concrete. The binder to aggregate ratio is obtained as 1:1.73:2.48 for a water to cement ratio of 0.35. The quantity of concrete ingredients required for 1 m3 of concrete mix of M60 grade is presented in Table 4.

Allocation of test specimens and tests conducted

Concrete moulds of a standard size of 100mm x 100mm x 100mm were prepared to cast concrete for compressive strength and ultrasonic pulse velocity tests conforming to [19]. To assess the effect of the dosage of PCE-based superplasticizer on the fresh and hardened state of concrete, the concrete in its fresh state was tested for slump as per [20], and the hardened concrete was tested for ultrasonic pulse velocity and crushing strength complying with [21,22].

Workability

The slump test was conducted to scale the effect of the superplasticizer on workability. Polycarboxylate ether-based SP at 0, 0.15, 0.25, 0.35, 0.45, and 0.55% by weight of cement was added. The scaled slumps of the PCE-based SP concrete are depicted in Fig. 1. The addition of the superplasticizer increased the slump. Concrete with 0.15% of SP gave a 25mm slump; the degree of workability as per IS456:2000 clause 7.1 is low [18]. The workability of concrete without SP, having a water to cement ratio of 0.4, was found to be very low; having a slump of about 15mm, it may be called zero-slump concrete. The concretes with 0.25% and 0.35% of PCE-based SP, having slumps of 50mm and 70mm respectively, belong to the medium degree of workability. The concretes with 0.45% and 0.55%, having slumps of 105mm and 130mm, are of a high degree of workability. Fig. 2 shows the desired slump (0.45% of SP) of the concrete.

Density

The density of concrete without superplasticizer was 2301 kg/m3. The densities of the concretes with added superplasticizer of 0.15, 0.25, 0.35, 0.45, and 0.55% by weight of cement were found to be 2331, 2420, 2434, 2443, and 2430 kg/m3.
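As a quick arithmetic check, the relative density gains quoted in the next paragraph follow directly from these raw values; a minimal sketch (illustrative only):

```python
# Densities in kg/m^3 as reported above, keyed by SP dose (% by weight of cement).
densities = {0.00: 2301, 0.15: 2331, 0.25: 2420, 0.35: 2434, 0.45: 2443, 0.55: 2430}

control = densities[0.00]
for dose, rho in sorted(densities.items()):
    if dose == 0.00:
        continue
    gain = 100 * (rho - control) / control
    print(f"{dose:.2f}% SP: {rho} kg/m^3, +{gain:.2f}% over control")
# Prints gains of 1.30, 5.17, 5.78, 6.17 and 5.61%, matching the reported
# 1.3, 5.17, 5.78, 6.17 and 5.60% up to rounding.
```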
It can be mentioned that the density of concrete increases with increasing dosage of superplasticizer, by 1.3, 5.17, 5.78, 6.17, and 5.60% as compared to concrete without superplasticizer.

Compressive strength

Control concrete (without SP) and concretes with varying dosages of SP were tested for crushing strength after 7 and 28 days of curing. The compressive strength findings are given in Table 5 and plotted in Fig. 3 and Fig. 4. The improvement in the crushing strength of concrete with superplasticizer at 0.15, 0.25, 0.35, 0.45, and 0.55% after 28 days is found to be 5.68, 11.06, 18.32, 31.93, and 12.94% as compared to concrete without SP. The desired strength as per IS10262:2019 was given by the 0.45% SP content. The concrete with 0.45% of SP gives the highest compressive strength. This improvement in compressive strength can be ascribed to the decrease in entrained air, the probable better consolidation of the mix due to higher workability, and more complete hydration of the cement owing to the dispersion ability of this optimum dosage of superplasticizer. The addition of SP beyond the 0.45% dose caused a little bleeding in the mix, which decreased the strength.

Ultrasonic pulse velocity

The UPV test is conducted to investigate the quality of hardened concrete for homogeneity, cracks, and voids, complying with [22]. The concrete samples were taken out of the curing tank after 28 days of normal curing and placed in the concrete laboratory room for 24 hours at an average temperature of 26°C. The UPV test readings are taken on four faces of cube specimens of standard size 100mm x 100mm x 100mm, with one reading on each face. To ascertain the quality grading of concrete with respect to the pulse velocity, Table 6 gives the recommendations [22]. Concrete with and without superplasticizer was tested for ultrasonic pulse velocity after 28 days. The findings for UPV are tabulated in Table 7 and Fig. 5. Results show that the concretes with 0 and 0.15% of SP are of good quality. The concretes with 0.25, 0.35, 0.45, and 0.55% are found to be of excellent quality. It is pertinent to note here that for the concretes with 0.15, 0.25, 0.35, 0.45, and 0.55% of SP, the 28-day UPV test results improved by 3, 4.62, 5.31, 9, and 4.85% as compared to concrete without superplasticizer, as plotted in Fig. 6.

Figure 5. Relationship between compressive strength and UPV.
Figure 6. Percentage increase in UPV results due to SP dose.

Modulus of elasticity

The 28-day elastic modulus of concrete has been obtained from a cube specimen of size 100mm x 100mm x 100mm under the compression testing machine at the recommended rate of loading of 2.3 kN/s. The deformation was measured by a deflectometer attached to a steel member welded to the lower platen of the CTM. The load and deformation were captured on video by an installed 8-megapixel camera. The test arrangement is presented in Fig. 7. The load-deformation data were used to plot stress-strain curves. The plotted curves are shown in Fig. 8. The moduli of elasticity of concrete with varying percentages of PCE-based superplasticizer have been calculated considering the linear component of the curves. The findings are given in Table 7. The improvement in the elastic modulus value of concrete with superplasticizer at 0.15, 0.25, 0.35, 0.45, and 0.55% after 28 days is found to be 37.5, 52.2, 54.8, 54.7, and 28.2% as compared to concrete without SP.
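For readers who wish to reproduce the modulus extraction, the sketch below fits the linear portion of a stress-strain record, which is the approach described above; the arrays are hypothetical placeholders, not the measured data of Fig. 8:

```python
import numpy as np

# Hypothetical readings from the linear portion of a stress-strain curve.
strain = np.array([0.0, 0.0005, 0.0010, 0.0015, 0.0020])   # dimensionless
stress = np.array([0.0, 17.5, 35.0, 52.5, 70.0])           # MPa (placeholder values)

slope, intercept = np.polyfit(strain, stress, 1)   # slope = elastic modulus in MPa
print(f"E = {slope / 1e3:.1f} GPa")                # 35.0 GPa for these numbers
```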
Conclusion

This study experimentally investigated the role and influence of PCE-based superplasticizer dosage on the properties of fresh and hardened standard M60 concrete with a water to binder ratio of 0.40. On the basis of the 28-day experimental findings, the following conclusions can be summarized.

- The superplasticizer dose of 0.45% in this concrete supported the desired workability, with a slump of 105mm. Concrete with a w/c ratio of 0.4 without SP was found to give a slump of 15mm and may be called very low workability concrete. Concrete with a 0.15% SP dose gave a mix of low workability, having a slump of 25mm. Concretes with 0.25% and 0.35% SP doses gave mixes of medium workability, having slumps of 50mm and 70mm respectively. Concrete with a 0.55% SP dosage caused slight bleeding and belongs to the class of high workability.

- The concretes with a dose of SP less than 0.45% entrapped air, giving a lower density of the concrete. The ultrasonic pulse velocity and crushing strength results of concretes having a dose of SP less than 0.45% follow the trend of density. The improvement in the crushing strength of concrete containing superplasticizer at 0.15, 0.25, 0.35, 0.45, and 0.55% after 28 days is found to be 5.68, 11.06, 18.32, 31.93, and 12.94% as compared to concrete without SP.

- Concretes with 0 and 0.15% of SP are of good quality. Concretes with 0.25, 0.35, 0.45, and 0.55% are found to be of excellent quality; however, the ultrasonic pulse velocity of the concrete with 0.45% of SP is the maximum. The increase in the ultrasonic pulse velocity of concrete with 0.15, 0.25, 0.35, 0.45, and 0.55% of SP after 28 days is 3, 4.62, 5.31, 9, and 4.85% as compared to concrete without superplasticizer.

- The code of practice IS456:2000, in its amendment no. 4, restricts the dosage of superplasticizer to 2% and of polycarboxylate-based admixture to 1%. However, the experimental results reveal that the maximum permitted percentage of polycarboxylate-based admixture is on the higher side. The experimental results suggest that separate recommendations restricting the dosages of superplasticizer and polycarboxylate-based admixture be provided for compacting concrete and self-compacting concrete.

- The slump, UPV, and crushing strength of concrete of grade M60 with a water to cement ratio of 0.40 give an optimum dose of 0.45% by weight of cement, which is quite less than the restricted quantity of the dosage.
Surgical repair of rectovaginal fistulas: predictors of fistula closure Introduction and hypothesis We report the clinical outcome of surgical repair for rectovaginal fistula (RVF) carried out by one operative team. We also investigate the predictive factors for fistula healing. Methods A retrospective cohort of 63 patients underwent local surgical repair of RVF during January 2008 and December 2017 by one operative group. The clinical features of the patients were reviewed. The association between fistula closure and diverse clinical parameters, including operative method, fistula location, prior repair, and diverting stoma, was analyzed. Results Sixty-three consecutive patients underwent 80 local surgical repairs by our surgical team. Forty-five patients eventually healed after an average of 1.22 procedures. The overall success rate per procedure was 71.2%, whereas the closure rate of the first operation was 55.5% (n = 35). The etiology of the fistula did not impact on the success rate of surgical repair. The history of prior repair predicted a lower success rate on both overall procedure (RR = 0.59, 95% CI 0.41–0.85, p = 0.008) and the first repair in our institution (RR = 0.50, 95% CI 0.31–0.80, p = 0.003). There was no difference in closure rate between the stoma group and the non-stoma group. Nevertheless, among the 15 patients who underwent more than one operation in our center, a diverting stoma seemed to be necessary (10 patients healed in the stoma group and none of the patients healed in the non-stoma group, p = 0.02). Conclusions History of prior surgical repair is a risk factor for failure. Diverting stoma did not increase the overall closure rate, but it seemed to be necessary for patients in whom the first operation failed. Introduction Rectovaginal fistulas (RVFs) are abnormal epithelial connections between the rectum and vagina [1], leading to the passage of rectum content into vagina, which causes both physiological and psychological suffering to female patients. So far, RVFs remain a challenging pathological condition owing to the high failure rate of surgical repair. RVFs are usually caused by congenital malformation, obstetric injury, trauma, perianal sepsis, Crohn's disease or are iatrogenic. Spontaneous healing of a fistula is rare and surgical repair is the main treatment for RVF. Surgical repair of RVF is difficult and frustrating, although a series of surgical options are available for RVF, including advanced flap, muscle interposition, plugs, fistula excision. However, the success rate of surgical repair varies from 0 to 80% [2][3][4][5][6]. A large portion of patients undergo more than one surgical repair. Patients and surgeons are both plagued by the unsatisfying success rate as well as permanent stoma, impaired sphincter function, and recurrence after healing of the fistula. The clinical outcome of RVF surgery could be affected by diverse factors, such as etiology, history of prior repair, the medical history of the patient, repair procedure, and diverting stoma. It has been reported that the failure of RVF repair is associated with Crohn's disease and a history of smoking [5]. Several studies have claimed that a diverting stoma did not increase the healing rate of repair [5,7], whereas one study found a diverting stoma necessary in radiation-related RVF [8]. However, fecal diversion was widely applied in RVF patients in clinical practice. 
In this study, we aim to summarize the clinical outcome of RVF repair performed by our operative group and endeavor to identify the predictive factors of clinical outcome, ultimately providing more clinical evidence for treating RVF. Patient demographics This study is a retrospective chart review analysis of patients who underwent surgical repair of RVF by one operative group (L Cui, W Chen, and Jh Fu) between January 2009 and December 2017. The exclusion criteria, including patients with high fistulas needing a trans-abdominal operation, patients who were treated only with a diverting stoma, and patients who were treated with seton. The success of the repair was diagnosed at least 3 months after the last surgical repair or 3 months after closure of the stoma. Healing of the fistula was defined by the absence of any rectum content discharged from the vagina. Digital rectal examination or colonoscopy were further performed to confirm that the fistula was closed. Unclosed or recurrent fistula was defined as persistence of symptoms and was further confirmed by physical examination and colonoscopy. For patients in whom an un-closed fistula or recurrent fistula was suspected, a piece of gauze was placed in the vagina and a retention enema with methylene blue was carried out. The gauze was then removed 1 h after the enema and patients with blue gauze were diagnosed as having RVF. Clinical data were retrospectively investigated including the following: patient demographics, history of prior repair, characteristics of the fistula, and the surgical repair procedure. To access the long-term outcome, a telephone interview was conducted on whether the patient still had symptoms related to RVF. The study was approved by the IRB of Shanghai Xinhua Hospital. Operative techniques A trans-anal approach fistulectomy was performed as follows. For patients without a diverting stoma, bowel was prepared by oral intake of polyethylene glycol (PEGS). For patients with a diverting stoma, patients received a preoperative cleaning enema. Intravenous antibiotic prophylaxis was given 30 min before surgery (ciprofloxacin and metronidazole). The patient was placed in a jack-knife position and the assistants exposed the rectum using anal retractors. The fistula was identified by methylene blue or a probe. The fistula and scar tissue surrounding it were excised to guarantee that the margin was healthy tissue. Then a dissection in the recto-vagina septum was performed, and the rectal wall and the vaginal wall were dissected. The rectum wound was closed by absorbable suture longitudinally; the vagina wound was also closed or left open for drainage. Intravenous antibiotics were used for 72 h. For patients without a diverting stoma, liquid diet was used for 5 days to guarantee soft and deformed stools. The decision regarding an ostomy for a diverting stoma was made by the senior author (L Cui) depending on the characteristics of the fistula, the history of previous repair, and the patient's wishes. For patients requiring a diverting stoma, a laparoscopic-assisted sigmoid colostomy was performed. For patients in whom the first repair from our institution failed, the next operation was performed at least 6 months after the first operation. Statistical analysis The categorical data were analyzed using Chi-squared test or Fisher's exact test. Student's t test was used to compare continuous variables. p < 0.05 was considered statistically significant. 
All statistical analyses were performed using the statistical software program R 3.1.4. Odds ratios and risk ratios were calculated using the fmsb package.

Patient demographics

Our study enrolled 63 patients who underwent 80 local repair procedures. The patients' average age was 32 (range 16-67) years, with a mean body mass index (BMI) of 21.4 (range 16.5-33.6). Thirty-four patients had undergone prior repair of RVF in other hospitals before admission to our department. The patients' demographics are listed in Table 1. Among the 18 patients with iatrogenically caused RVF, 13 patients had complications after low anterior resection (LAR) of the rectum, 2 after a procedure for prolapse and hemorrhoids (PPH), whereas 3 fistulas were secondary to gynecological surgery. Three patients with congenital RVF had an accompanying imperforate anus and had undergone surgery in childhood. The other 15 patients had congenital fistulas without accompanying malformation.

Clinical outcome

The median follow-up for the whole cohort was 50 (range 19-97) months. Forty-five patients were eventually cured after an average of 1.22 procedures. The overall success rate was 71.4%. Thirty-five patients were cured after the first repair, and the success rate of the first surgery was 55.5% (35 out of 63). The overall success rate and first-repair success rate of the diverse repair procedures are listed in Table 3. Further analysis suggested that tissue interposition (Martius flap or gracilis interposition) did not improve the success rate compared with other procedures (7 out of 13 vs 38 out of 67, p = 0.9). Post-operative complications were observed in 17 procedures (21.25%), including 3 patients with urinary tract infection, 1 patient with pneumonia, 6 with wound dehiscence, 1 with rectal bleeding, and 6 with surgical site infection. One patient with rectal bleeding and 2 patients with surgical site infection required re-operation; the re-operation rate was 3.75%. No peri-operative mortality was observed. To assess the risk of repair failure, multiple clinical factors were analyzed (Table 4). Clinical factors including age, BMI, etiology, surgical procedure, smoking, and alcohol use did not affect the closure rate of the overall repairs or the first repair. A history of previous repair indicated a lower success rate of fistula closure, both over all procedures (RR = 0.59, 95% CI 0.41-0.85, p = 0.008) and at the first repair (RR = 0.50, 95% CI 0.31-0.80, p = 0.003).

The role of a diverting stoma in RVF surgery

A diverting stoma was usually required in patients with an RVF. In this study, we sought to identify the role of the diverting stoma in RVF repair. In this cohort, 12 patients had a stoma before first admission and 15 patients underwent laparoscopic sigmoid colostomy; thus, 27 patients in total were operated on with a stoma, whereas 36 patients were operated on without a stoma. There was no difference in the closure rate between the stoma group and the non-stoma group. All of the 21 patients who healed with a stoma underwent a stoma closure operation within 3-12 months after repair. One patient manifested symptoms of recurrent fistula within 3 months after stoma reversal, 3 patients developed wound infection, and 2 patients had an ileus after the operation. No re-operation- or stoma reversal-related mortality was recorded. We further sought to determine whether a diverting stoma is necessary in patients in whom a first attempt at repair failed. In this study, 15 patients in whom the first repair at our hospital failed underwent re-operation for local repair. For patients without a stoma, a diverting stoma was routinely recommended.
Three patients declined colostomy and were operated on without a stoma whereas 12 did have a stoma. Ten of the 12 patients eventually healed, whereas none of the patients healed in the non-stoma group(p = 0.02). No statistical significance was observed between a successful operation and the size of the fistula (p = 0.77) or the location of the fistula (p = 0.61; Table 5). Thus, it was observed that a diverting stoma is probably necessary in patients in whom the first operation at our institution failed. Discussion In this study, we showed the characteristics and outcomes of local surgical repair of RVF patients with diverse etiologies. Three-quarters of the patients healed eventually whereas in one-quarter of the patients the repair failed. The overall success rate in our study in consistent with most of the previous reports on RVF local repair [5,7]. In our study, the proportion of RVF etiology was quite different from that of previous reports [5,7,15,16]. Congenital malformation, obstetric injury, and iatrogenic cause represented the majority (85%)of cases in our study, whereas inflammatory bowel disease is the leading cause of RVF in the literature [5,7]. This inconsistency is probably because of the relatively low incidence of IBD in China. Adult congenital RVF is common in China; these patients would not seek RVF repair until marriage age, probably because of the lack of prompt medical care and economic support. A large fistula usually manifested in adult congenital RVF patients, and an advanced flap or tissue transposition procedure is recommended to achieve tension-free reconstruction by healthy tissues. Eighteen patients with iatrogenic RVF were included in our patient cohort, among which 13 were secondary to anastomotic leakage after low anterior rectal resection. The incidence of RVF after LAR was reported to be 3% by Watanabe et al. [17] LARrelated RVFs were treated individually. The reported management of these fistulas included conservative treatment, diverting stoma alone, local repair, and trans-abdominal operation. In our department, patients with a fistula in the lower part of the rectum were initially treated with local repair. A diverting stoma was usually constructed in the presence of anastomotic leakage before admission to our department. Regarding the situation, removal of the anastomotic staple pins and stitches surrounding the fistula was pivotal for healing. Successful repair was determined by several factors. Features of the fistula, age, BMI, lifestyle, comorbidities, and history of previous repair directly affected the outcome. In our study, we demonstrated that the history of previous repair predicted a lower success rate in the first operation. Previous studies also implicated recurrent RVF as being the risk factor for failure. Lowry et al. reported that the success rate was 85% in the initial operation and this decreased to 55% at the third attempt [18] . A recent study carried out by Pinto et al. came to a similar conclusion that a history of prior surgery correlated with a higher failure rate [5]. Management of the recurrent RVF is even more difficult than initial repair. The scar tissue surrounding the fistula and the foreign body reaction of the stitches made the operation even more difficult. 
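The p = 0.02 comparison just described can be reproduced with a two-sided Fisher's exact test on the corresponding 2 x 2 table (10 of 12 healed with a stoma versus 0 of 3 without). A short check, using SciPy here as a stand-in (the study itself used R 3.1.4 with the fmsb package):

```python
from scipy.stats import fisher_exact

table = [[10, 2],   # re-operated with a diverting stoma: healed, not healed
         [0, 3]]    # re-operated without a stoma:        healed, not healed
_, p = fisher_exact(table)   # two-sided by default
print(f"p = {p:.3f}")        # p = 0.022, consistent with the reported p = 0.02
```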
Some studies claimed that the interval between operations might enhance the success rate; however, in our study, for patients in whom the surgical repair failed, an interval of more than 6 months was routinely recommended to eliminate the local inflammation and edema. Smoking and diabetes were considered to influence the success rate of the operation according to common sense; however, owing to the smaller number of cases, such a correlation was not observed in our study. A diverting stoma is a tough decision for both surgeons and patients. In our practice, patients with the following characteristics were usually advised to undergo a diverting stoma: fistula with a high location and large size; fistulas secondary to LAR; patients were supposed to undergo a gracilis graft; patients in whom the first attempt at our institution failed. The role of diverting stoma in a successful surgical repair was controversial. Consistent with our result, the previous study did not reveal the relationship between the diverting stoma and the success rate [5]. However, most of authors of the published reports would still perform diverting stoma to treat RVF. Since the stoma did not benefit the overall success rate, it is not recommended as a routine procedure for RVF patients. Some studies suggest that a stoma should be considered in patients with a complex fistula or complicated recurrent fistulas [5,19]. To answer the question who benefits from a diverting stoma, we further analyzed our data. For patients in whom the first operation in our department failed, although the sample size was limited, a diverting stoma seemed to be necessary to improve the success rate. Our patients who required a diverting stoma underwent a laparoscopically assisted sigmoid colostomy, which was minimally invasive and had higher acceptance by patients. In this study, diverse surgical procedures for RVF repair were applied. The choice of procedure depended on the feature of the fistula, involvement of the sphincter, and the experience of the surgeon. It is hard to compose a standard algorithm on the procedure for selecting RVF patients, but there are some common concerns and principles when the decision is made. Removal of the fistula tract, together with the surrounding scarred or granulomatous tissues, and the reconstruction of the septum by tension-free tissues with a good supply are the main principles of local surgical repair. A trans-anal approach, including an advanced flap, was frequently applied by the colorectal surgeon, whereas the trans-vaginal or trans-perineal approach was the prior choice of the gynecologist [19,20]. According to our practice, advanced flap or tissue interposition were more favored for the first attempt in patients with a large fistula to achieve a good blood supply and tension-free reconstruction. Fistulectomy was widely used in our study. This operation was suggested for patients with a simple RVF or for those in whom the previous repair failed, but manifested a smaller fistula than before. The advantage of this operation was that it removed the septic fistula and the unhealthy tissue surrounding it, guaranteeing that the suture was applied to healthy tissues. The wall of the rectum, vagina, and rectovaginal septum was separated and reconstructed. A thick and enforced tissue was laid between the rectum and vagina. The reconstruction of the fistulectomy was similar to the transperineal approach; however, it avoided an incision on the perineum [12,13]. 
With the development of transanal endoscopic microsurgery (TEM), such a procedure was reported to be performed with TEM, which provided better exposure and allowed for a precise operation [20,21]. Repair of high RVFs is even more challenging and usually requires a trans-abdominal operation. In our institution, fistulas that could not be accessed via a local approach (trans-anal, trans-vaginal, or trans-perineal) were recommended for trans-abdominal resection and low anastomosis. Owing to the distinct features of these fistulas and surgical procedures, we did not include these patients in our study. There were several limitations in this study. First, sphincter status was evaluated by clinical manifestation pre-operatively; imaging of sphincter defects was not routinely performed. Second, functional outcome was also not reported, because the follow-up was carried out by telephone interview. Since the operative group served in a colorectal surgery-specific department, the study tended to include more complex and refractory fistulas. As mentioned above, the current study included a higher proportion of adult congenital RVF patients and iatrogenic RVF patients. We also noticed that, compared with Western studies [5], we included more patients with RVFs larger than 1 cm. Although congenital RVFs are usually large, this inconsistency may also reflect the selection bias of a tertiary referral center. A prospective cohort study with a larger sample size or a randomized clinical trial could be considered to test our hypothesis that a diverting stoma might promote the success rate of RVF repair. In conclusion, in this study we report the clinical outcome of local surgical repair of RVFs in 63 patients. A history of prior surgical repair is a risk factor for failure of the first repair at our institution. A diverting stoma did not increase the overall closure rate, but it seemed to be necessary for patients in whom the first operation at our institution failed.

Compliance with ethical standards

Conflicts of interest None.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Special values of Goss $L$-series attached to Drinfeld modules of rank 2 Inspired by the classical setting, Goss defined $L$-series attached to Drinfeld modules. In this paper, for a fixed choice of a power $q$ of a prime number and a given Drinfeld module $\phi$ of rank 2 with a certain condition on its coefficients, we give explicit formulas for the values of Goss $L$-series attached to $\phi$ at positive integers $n$ such that $2n+1\leq q$ in terms of polylogarithms and coefficients of the logarithm series of $\phi$. 1. Introduction 1.1. Background and Motivation. One of the major sources of constructing L-functions is related to Galois representations. For a number field F , letF be its algebraic closure and G F be its absolute Galois group. Let l be a prime number. Consider a collection ρ := (ρ l ) of l-adic representations which are continuous homomorphisms ρ l : G F → GL(V l ) where V l is a finite dimensional Q l -vector space. Then we say ρ forms a strictly compatible system if there exists a finite set U of places of F such that (i) For all µ ∈ U and all l relatively prime to µ, ρ l is unramified at µ. has coefficients in Q and is independent of l. For example, let E be an abelian variety of dimension g over F and for any i ∈ Z ≥0 , E[l i ] be the group of all l i -torsion points of E inF . Then one can consider V l as the following vector space l to see that the family ρ = (ρ l ) of representations ρ l : G F → GL 2g (V l ) induced from the continuous action of G F on E[l i ] forms a strictly compatible system (see [17] and [30] for details). For each such system ρ = (ρ l ) of Galois representations, one can assign the L-function where µ runs over all finite places not in U and N µ is the norm of the place µ of F . The function L U (ρ, s) converges to an analytic function for s ∈ C when the real part ℜ(s) of s is sufficiently large. We refer the reader to [17] and [33] for further details about the subject. Drinfeld A-modules and L-series. In the present paper, we focus on the special values of an analogue of aforementioned L-series in the positive characteristic case whose construction is due to Goss [23]. Let F q be the finite field with q elements and θ be an independent variable over F q . We define A to be the set of polynomials in θ with coefficients in F q and A + to be the set of monic polynomials in A. Let K be the fraction field of A and K ∞ be the completion of K at the infinite place with respect to the norm | · | ∞ normalized so that |θ| ∞ = q. We also set C ∞ to be the completion of a fixed algebraic closure of K ∞ . Let K sep be the separable closure of K in C ∞ . Let t be another independent variable and set A := F q [t]. For any monic irreducible polynomial w of A, we define K w to be the completion of F q (t) at w. Consider a family ρ = (ρ w ) of continuous representations of Gal(K sep /K) on a finite dimensional K w -vector space V w . For any prime element v of A + , let us set Frob v to be the geometric Frobenius at v. We say ρ forms a strictly compatible system if there is a finite set U ′ of primes of A + such that (i) For all v ∈ U ′ and all w |t=θ ∈ A relatively prime to v, ρ w is unramified at v. (ii) For such v and w, the polynomial has coefficients in F q (t) and is independent of w. Analogously, we define the L-function L U ′ (ρ, n) corresponding to a strictly compatible system ρ = (ρ w ) of representations ρ w : Gal(K sep /K) → GL(V w ) by where v runs over prime elements of A + not in U ′ . 
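Before moving on, the Carlitz quantities [n], D_i and L_i introduced above can be computed symbolically from their recursions D_0 = 1, D_i = [i] D_{i-1}^q and L_i = (-1)^i [i][i-1]...[1]; the sketch below is purely illustrative, with q = 3 chosen arbitrarily:

```python
from sympy import symbols, Poly, GF

q = 3
theta = symbols('theta')
dom = GF(q)

def bracket(n):
    # [n] = theta^{q^n} - theta
    return Poly(theta**(q**n) - theta, theta, domain=dom)

D = [Poly(1, theta, domain=dom)]   # D_0 = 1
L = [Poly(1, theta, domain=dom)]   # L_0 = 1
for i in range(1, 4):
    D.append(bracket(i) * D[-1]**q)   # D_i = [i] D_{i-1}^q
    L.append(-bracket(i) * L[-1])     # L_i = (-1)^i [i][i-1]...[1] = -[i] L_{i-1}

print(D[1])   # [1] = theta^3 - theta
print(L[2])   # [2][1] = (theta^9 - theta)(theta^3 - theta)
```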
For integer values of n, the function L U ′ (ρ, n) converges in the Laurent series ring F q ((1/t)) when n is sufficiently large. For further details, we refer the reader to [7] and [39]. We point out that throughout the paper, by a slight abuse of notation, we continue to denote the value constructed by setting t = θ in L U ′ (ρ, n) by the same notation and hence our L-values will converge in K ∞ . Let L be a field extension of K in C ∞ . We define the twisted power series ring L[[τ ]] with the rule τ c = c q τ for all c ∈ L and set L[τ ] to be the subring of L[[τ ]] containing only polynomials in τ . A Drinfeld A-module φ of rank r is an F q -linear ring homomorphism φ : A → L[τ ] defined by (2) φ θ = A 0 + A 1 τ + · · · + A r τ r so that A 0 = θ and A r = 0. For each 0 ≤ i ≤ r, we call A i the i-th coefficient of φ. One can assign the exponential series exp φ = i≥0 ξ i τ i ∈ L[[τ ]] to φ subject to the condition that ξ 0 = 1 and exp φ θ = φ θ exp φ . The logarithm series of φ is defined with respect to the condition that γ 0 = 1 and θ log φ = log φ φ θ . It is also the formal inverse of the exponential series exp φ in L [[τ ]]. One of the examples of Drinfeld A-modules is the Carlitz module C given by C θ = θ + τ and its relation with the class field theory has been studied by Carlitz [10], [11] and Hayes [27]. For any non-negative integer n, we set [n] := 1 if n = 0 and [n] := θ q n − θ otherwise. The exponential series exp C of the Carlitz module is defined by where D 0 := 1 and D i := [i]D q i−1 for i ≥ 1. Furthermore the logarithm series log C can be given by where L 0 := 1 and L i = (−1) i [i][i − 1] . . . [1] when i ≥ 1. Using the coefficients of log C , one can also define the n-th polylogarithm function log n given by log n (z) = i≥0 z q i L n i whenever |z| ∞ < nq/(q − 1). For more information on Drinfeld A-modules, we refer the reader to [24,Sec. 3 and 4] and [40,Sec. 2 and 3]. Taelman [36] introduced effective t-motives which can be seen as a generalization of Anderson t-motives defined in [1]. Using Anderson's theory [1], one can show that for every Drinfeld A-module φ, there exists a unique corresponding effective t-motive M φ up to isomorphism. Furthermore in [21], Gardeyn proved that one can construct a strictly compatible system of Galois representations ρ = (ρ w ) attached to M φ with a certain choice of the K w -vector space V w (see §2.3 for definitions and details). Let n be a positive integer and L U ′ (M φ , n) be the value of the L-function defined as in (1) corresponding to the system of Galois representations ρ = (ρ w ) attached to M φ . Our main purpose in the present paper is to study certain special values of the L-function L(M φ , n) := L U ′ (M φ , n) when φ is a Drinfeld A-module of rank 2 defined under some conditions on its coefficients. To motivate our main result, we first explain the well-known Carlitz module case: Consider the q-expansion of n given by n = n j q j where 0 ≤ n j ≤ q − 1 and n j = 0 for j ≫ 0. Set Γ n+1 := j≥0 D n j j ∈ A. Thanks to the results of Hsia and Yu [28,Thm. 3.1] and Anderson and Thakur [2,Thm. 3.8.3], we know that for some h j ∈ A and m < nq/(q − 1). The Main Result. Let φ be a Drinfeld A-module of rank 2 defined by where a ∈ L and b ∈ L \ {0}. For any finite set S ⊂ Z ≥0 and a non-negative integer j, let us define S + j := {i + j : i ∈ S}. Set P 2 (0) := {(∅, ∅)} and for any n ≥ 1, we define to be the set of tuples (S 1 , S 2 ) such that S 1 , S 2 and S 2 + 1 are distinct and form a partition of {0, 1, . . . , n − 1}. 
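The set P_2(n) just defined can be enumerated by brute force: choose S_2, form S_2 + 1, and let S_1 be the complement whenever the disjointness conditions hold (here "distinct" is read as pairwise disjoint). An illustrative sketch:

```python
from itertools import chain, combinations

def subsets(xs):
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def P2(n):
    base = set(range(n))
    out = []
    for S2 in map(set, subsets(range(n))):
        S2p1 = {i + 1 for i in S2}
        # S2 and S2 + 1 must be disjoint and contained in {0, ..., n-1}.
        if (S2 & S2p1) or not (S2 | S2p1) <= base:
            continue
        S1 = base - S2 - S2p1   # forced: the three sets partition the base set
        out.append((frozenset(S1), frozenset(S2)))
    return out

for n in range(1, 7):
    print(n, len(P2(n)))   # 1, 2, 3, 5, 8, 13: the counts grow like Fibonacci numbers
```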
We call the elements of P 2 (n) shadowed partitions. For any positive integer n, we also set P 1 2 (n) to be the set of shadowed partitions (S 1 , S 2 ) ∈ P 2 (n) such that 0 ∈ S 1 . Define w 1 (S) := 0 if S = ∅ and w 1 (S) := i∈S q i otherwise. Furthermore, for any finite set S ⊂ Z ≥0 with 0 ∈ S, we let w 2 (S) := 0 if S = {0} and w 2 (S) := i∈S\{0} q i if {0} S. For any U = (S 1 , S 2 ) ∈ P 2 (n), we define the component C U of γ n corresponding to U by if U = (S 1 , S 2 ) ∈ P 1 2 (n). We set F 0 := 0 and T 0 := 1. For n ≥ 1, define El-Guindy and Papanikolas [18,Thm. 3.3] showed that for any n ≥ 0, the n-th coefficient γ n of the logarithm series log φ can be given by We now further assume that φ is the Drinfeld A-module as in (3) such that a ∈ F q and b ∈ F × q . Letφ be another Drinfeld A-module given byφ (1) where logφ is the logarithm function ofφ induced by its logarithm series (see Remark 5.6 for details). Our main result which concerns special values of L(M φ , n) at integers n ≥ 2 in a certain domain can be stated as follows (later stated as Theorem 5.9). Theorem 1.1. Let φ be a Drinfeld A-module of rank 2 given by such that a ∈ F q and b ∈ F × q . Then for any positive integer n satisfying 2n + 1 ≤ q, we have The strategy of the proof of Theorem 1.1 and the outline of the paper can be explained as follows: (I) After introducing some preliminaries and notation used throughout the paper, we define, in §2.2, the t-module G n given by the tensor product of a Drinfeld A-module φ of rank 2 and the n-th tensor power of the Carlitz module. We also discuss effective t-motives, Taelman t-motives and their L-series (see §2.3, §2.4 and §2.5 for details). (II) In §3, we analyze the certain entries of the coefficients of the logarithm series Log Gn of G n . Using Papanikolas' method [32,Sec. 4.3] as well as Lemma 3.1 and Proposition 3.2, we relate them to shadowed partitions and Carlitz logarithm coefficients (Corollary 3.3). We also detect some elements living in the convergence domain of the function Log Gn induced by the logarithm series of G n (Theorem 3.5). (III) In §4, we introduce the unit module U(G n /A) of G n (see Definition 4.9) and recall some results on invertible lattices which are due to Debry [16,Sec. 2]. Combining them with Theorem 3.5, we give the generators of the unit module U(G n /A) as an A-module in terms of the values of the logarithm function Log Gn at some algebraic points (Theorem 4.11). (IV) In §5, we apply the work of Anglès, Ngo Dac and Tavares Ribeiro [3] to our construction. We also study the Taelman L-values and show how they are related to the special values of Goss L-series (Proposition 5.7). Finally, we formulate the special value L(M φ , n + 1) and prove Theorem 1.1 by using Theorem 4.11. Remark 1.2. Assume that ̺ is a Drinfeld A-module of r ≥ 2 given by ̺ θ = θ + A 1 τ + · · · + A r τ r . Although our arguments introduce a way to generalize Theorem 1.1 when ̺ is defined with respect to the condition that A i ∈ F q for all 1 ≤ i ≤ r − 1 and A r ∈ F × q , our current method does not allow us to prove similar results when some of the coefficients of ̺ is in A \ F q . This is due to the difficulty of understanding the generators of U(G n /A) in that case. One can also generalize Theorem 1.1 for larger values of n, if a version of Proposition 5.8 for such values of n is understood (see §5.3 for details). We hope to tackle these problems in the near future. 
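For concreteness, the coefficients γ_n can also be generated directly from the defining relation θ log_φ = log_φ φ_θ: comparing coefficients of τ^i on both sides yields the recursion γ_i (θ − θ^{q^i}) = a^{q^{i−1}} γ_{i−1} + b^{q^{i−2}} γ_{i−2} with γ_0 = 1, consistent with γ_1 = a/(θ − θ^q) = a/L_1 noted in Section 3 below. A small symbolic sketch (q = 3 is an arbitrary illustrative choice):

```python
from sympy import symbols, simplify

q = 3
a, b, theta = symbols('a b theta')

gamma = [1, a / (theta - theta**q)]   # gamma_0 = 1, gamma_1 = a / L_1
for i in range(2, 5):
    num = a**(q**(i - 1)) * gamma[i - 1] + b**(q**(i - 2)) * gamma[i - 2]
    gamma.append(num / (theta - theta**(q**i)))

# gamma_2 = (a^{q+1} + b (theta - theta^q)) / ((theta - theta^q)(theta - theta^{q^2}))
print(simplify(gamma[2]))
```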
Ngo Dac, Nathan Green, Yoshinori Mishiba and Changningphaabi Namoijam for useful suggestions and fruitful discussions. The author also thanks the referee for reading the manuscript carefully and all the suggestions to make the content of the present paper clearer. 2.1. Hyperderivatives. For any non-negative integers i and j, the binomial coefficient i j is given by Furthermore, when k = −i is a negative integer, we define We now define the j-th hyperdifferential operator ∂ j θ : K ∞ → K ∞ with respect to θ by Note that if j = 0, then ∂ 0 θ (g) = g for all g ∈ K ∞ . Let C ∞ ((t)) be the field of formal Laurent series in t with coefficients in C ∞ . For any j ∈ Z ≥0 , we define the j-th hyperdifferential operator ∂ j t : C ∞ ((t)) → C ∞ ((t)) with respect to t by Furthermore, when j = 0, we have ∂ 0 t (f ) = f for any f ∈ C ∞ ((t)) and if f 1 , f 2 ∈ C ∞ ((t)) and n ≥ 0, we have the following product rule: For more details on hyperderivatives, we refer the reader to [8], [15] and [20]. The next proposition is useful to deduce our results relating to the hyperderivatives. The t-module G n . We start with the definition of t-modules and then analyze the tensor product of certain t-modules which takes our interest throughout the paper. For further details on t-modules and their tensor products, we refer the reader to [9], [25], [26] and [29]. (i) A t-module of dimension d is a tuple G := (G d a/L , ψ) where G d a/L is the d-dimensional additive algebraic group over L and ψ is an F q -linear ring homomorphism ψ : where R is a subring of L, then we say G is defined over R. (ii) Morphisms between t-modules G = (G d 1 a/L , ψ 1 ) and G ′ = (G d 2 a/L , ψ 2 ) are given by any element Ψ ∈ Mat d 2 ×d 1 (C ∞ )[τ ] satisfying Ψψ 1 (θ) = ψ 2 (θ)Ψ. We also denote the category of t-modules by G . Example 2.3. (i) Any Drinfeld A-module φ defined as in (2) can be considered as a t-module of dimension one defined over L. (ii) Let n be a positive integer. Another example of t-modules can be given by the Carlitz n-th tensor power C ⊗n := (G n a/K , ψ) where ψ is the F q -linear ring homomorphism given by Note that when n = 1, the definition of C ⊗1 coincides with the Carlitz module. We refer the reader to [2] for further details. Let R be an F q -algebra containing each entry of A 0 , A 1 , . . . , A m . The A-module action on Mat d×1 (R) induced by G = (G d a/L , ψ) is given as and denote such A-module by G(R). Furthermore, we set ∂ ψ (θ) := A 0 and define the Amodule action on Mat d×1 (R) via the map ∂ ψ : A → Mat d (R) so that (6) θ and denote such A-module by Lie(G)(R). When L = K and R ′ is any subring of C ∞ containing K ∞ , by using [19,Lem. 1.7], the A-module action induced by ∂ ψ as in (6) can be uniquely extended to a K ∞ -vector space action on Mat d×1 (R ′ ) via the map ∂ ψ : x for any f ∈ K ∞ and x ∈ Mat d×1 (R ′ ). We denote such K ∞ -module by Lie(G)(R ′ ). One can assign an exponential series Exp G ∈ Mat d (L) [[τ ]] to any t-module G given by subject to the condition that Exp G ∂ ψ (θ) = ψ(θ) Exp G . The exponential series Exp G induces to an everywhere convergent and vector valued F q -linear homomorphism Exp G : The logarithm series Log G ∈ Mat d (L) [[τ ]] of G which is the formal inverse of Exp G is given by with respect to the condition Similar to the exponential series, the logarithm series Log G induces to an F q -linear homomorphism Log G : D → Lie(G)(C ∞ ) defined by where D is the domain of convergence of Log G in G(C ∞ ) (see [26,Lem. 2.5.4] for more details on D). 
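Since the next subsection computes ∂_φn(a) through hyperderivatives with respect to θ, a concrete sketch of the operator itself may help; it assumes the usual coefficient-wise definition ∂_θ^j(θ^i) = (i choose j) θ^{i−j}, extended F_q-linearly, with binomial coefficients reduced modulo the characteristic p:

```python
from math import comb

p = 3   # the characteristic; q is a power of p

def hyperderivative(coeffs, j):
    """j-th hyperderivative of sum(coeffs[i] * theta^i) with respect to theta,
    assuming d_j(theta^i) = C(i, j) * theta^(i - j) with coefficients mod p."""
    out = [0] * max(len(coeffs) - j, 1)
    for i, c in enumerate(coeffs):
        if i >= j:
            out[i - j] = (out[i - j] + c * comb(i, j)) % p
    return out

# In characteristic 3: d_1(theta^3) = 3 theta^2 = 0, while d_3(theta^3) = 1.
print(hyperderivative([0, 0, 0, 1], 1))   # [0, 0, 0]
print(hyperderivative([0, 0, 0, 1], 3))   # [1]
```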
We now fix a positive integer n and the Drinfeld A-module φ of rank 2 defined by We introduce the t-module G n = (G 2n+1 a/K , φ n ), constructed from φ and C ⊗n , where φ n is the F q -linear ring homomorphism φ n : A → Mat 2n+1 (K)[τ ] given by (10) φ n (θ) = θ Id 2n+1 +N + Eτ such that N ∈ Mat 2n+1 (F q ) and E ∈ Mat 2n+1 (A) are defined as Proof. By the F q -linearity of the action ∂ φn on A and hyperdifferential operators with respect to θ, it is enough to prove the lemma for a = θ i for i ∈ Z ≥0 . We do induction on i. If i = 0, then we are done. Assume that the assumption holds for all i. Note that for positive integers i and j such that i ≥ j we have Using (11), we obtain Using the fact that φ n is an F q -linear ring homomorphism, we obtain ∂ φn (θ i+1 ) = ∂ φn (θ)∂ φn (θ i ). Thus the equality in (12) implies the assumption for i + 1 as desired. 2.3. Effective t-motives over K. We define K[t] to be the commutative polynomial ring consisting of polynomials in t with coefficients in K and K(t) to be its quotient field. For any Definition 2.6. (i) An effective t-motive M defined over K is a left K[t, τ ]-module which is free and finitely generated over K[t] such that the determinant of the matrix representing the τ -action on M with respect to any chosen Now for a given effective t-motive M which is also free and finitely generated over Thus we can define an F q -linear ring homomorphism Φ : We call the t-module (G d a/K , Φ) formed via this process an abelian t-module corresponding to M. By [37, Thm. 1] (see also [34,Thm. 10.8]), we know that there is an anti-equivalence of categories between the subcategory of effective t-motives over K which are also finitely generated over K[τ ] and the category of abelian t-modules defined over K. We now see some examples of such correspondence between effective t-motives and abelian t-modules. (i) Let φ be the Drinfeld A-module of rank 2 given as in (9). We set a left It is the effective t-motive corresponding to φ. One can also easily see that The r-th exterior power of M is called the determinant of M and is denoted by det(M). One can easily prove that det(M) is an effective t-motive of rank 1 over K[t]. For instance, let φ be the Drinfeld A-module of rank 2 defined as in (9). (iii) Let n be a non-negative integer. We now define the left K[t, τ ]-module C ⊗n := K[t]m whose τ -action is given by . It is free of rank one with the K[t]-basis {m} and free of rank n over K[τ ] with the basis {m, (t − θ)m, . . . , (t − θ) n−1 m}. One can see that the abelian t-module corresponding to C ⊗n is given by C ⊗n defined in Example 2.3(ii). When n = 1, we also set C := C ⊗1 . C ⊗n on which τ acts diagonally. Using Example 2.7, we see that M n is an effective t-motive which is free of rank 2 over -basis for M n and hence M n is free of finite rank over K[τ ]. Moreover, the following identities hold: Thus we see that the multiplication by t on M n is given by which shows that G n = (G 2n+1 a/K , φ n ) is the t-module corresponding to M n . Hence G n is an abelian t-module. Let w be a monic irreducible polynomial in A and A w be the completion of A at w. Let M be an effective t-motive over K and set be the set of elements of I fixed by the action of τ . We define the A w -module [37,Prop. 1]). Let ρ = (ρ w ) be the family of homomorphisms ρ w : Gal(K sep /K) → GL(V w (M)) induced by the action of Gal(K sep /K) on V w (M). The next theorem is due to Gardeyn (see also [37,Prop. 2]). (ii) The family ρ = (ρ w ) forms a strictly compatible system. 
Throughout the present paper, we call ρ = (ρ w ) in Theorem 2.8 the family of representations attached to the effective t-motive M. Let M 1 and M 2 be effective t-motives defined over K. The tensor product showed that for sufficiently large n, Hom(M 1 , M 2 ⊗ C ⊗n ) induces the structure of an effective t-motive whose K[τ ]-module structure can be described in what follows. For Note that after the extension of scalars, we have . ) be the representation matrix of f i,j with respect to m 1 and m 2 . Then we define τ · f i,j : M 1 → M 2 ⊗ C ⊗n to be the K(t)-module homomorphism whose representation matrix with respect to m 1 and m ′ 2 := {m 2,1 ⊗ 1, . . . , m 2,s 2 ⊗ 1} is given by We are now ready to give the definition of Taelman t-motives. where M is an effective t-motive defined over K and n ∈ Z. (i) It is important to point out that for effective t-motives M 1 and M 2 , the canonical isomorphism actually shows that the definition of morphisms between the objects of T is independent of n. (ii) The category M of effective t-motives can be embedded into T as a subcategory via the fully faithful functor M → (M, 0) and by the abuse of notation, we continue to denote the image of M under this functor by the same notation. For any Taelman t-motive M 1 := (M 1 , i 1 ) and M 2 := (M 2 , i 2 ), we define We define the internal hom in T by where i ∈ Z ≥0 is sufficiently large. For an effective t-motive M, we have the natural isomorphism between M ⊗ C j ⊗ C and M ⊗ C j+1 for any j ≥ 0 which implies that by Definition 2.9(ii). Moreover for sufficiently large i and M 1 , M 2 ∈ M, we have Thus one can show that the definition of the internal hom above is actually independent of i and well-defined up to isomorphism of Taelman t-motives. (i) For any positive integer n, consider the effective t-motive C ⊗n . One can easily show that Hom(C ⊗n , C ⊗n ) ∼ = 1. In other words, (C ⊗n ) ∨ = Hom(C ⊗n , 1) can be identified by (1, −n) in T . We also note that by (14), C ⊗n can be also identified by (1, n) ∼ = (C ⊗(n−i) , i) for any i ∈ {0, . . . , n − 1}. Furthermore, using (13), one can see that (ii) Letφ be the Drinfeld A-module given byφ θ = θ − ab −1 τ + b −1 τ 2 such that a ∈ F q and b ∈ F × q . Using the K[τ ]-module structure on Hom(Mφ, C), one can see that where the equality follows from the previous example and (13). Moreover, since Mφ ⊗ (−b −1 1) ∼ = M φ , using (13), one further sees that Taelman provided that both sides of the identity converge. As our first example, for any s ∈ Z, we define the L-series L(1, s) corresponding to the trivial Taelman t-motive 1 by where the product runs over irreducible elements in A + and it converges for any positive integer s (see [24,Sec. 8]). In this subsection and the rest of the paper, for any positive integer n, we are mainly interested in the L-function L(M n , ·) of the effective t-motive M n defined in §2.3 corresponding to G n given in (10). Let ρ = (ρ w ) be the family of homomorphisms ρ w : We know by Theorem 2.8 that ρ indeed forms a strictly compatible system and hence one can define the L-function L(M n , ·) := L((M n , 0), ·) = L U ′ (ρ, ·) as in (1). We recall that the values of our L-function converge in K ∞ simply after replacing the variable t with θ as explained in §1.2. Furthermore one can check that the exceptional set U ′ of primes of A + in this case is empty. Since M n ∼ = (M ′ , m) for some effective t-motive M ′ and m ∈ Z, by using (18), one can recover values of L(M ∨ n , s) in terms of L(M ′ , s) whenever they are convergent. 
Indeed by [38,Prop. 8], we know that L(M ∨ n , s) converges to an element in K ∞ for any integer s ≥ 0. Before we finish this subsection, we introduce the Taelman L-value corresponding to an abelian t-module G = (G d a/K , ψ) which plays a fundamental role to prove our main result. We refer the reader to [19] and [38] for further details. For any finite A-module M, we set which is the characteristic polynomial of the map θ ⊗ 1 on M evaluated at X = θ. and v ∈ A + be a prime. We define the matrix B : We define Lie(G)(A/vA) to be the direct sum (A/vA) d of d-copies of A/vA equipped with the A-module action given by Similarly, we define G(A/wA) as (A/wA) d with the A-module action given by Now following [19], we define the Taelman L-value L(G/A) by the infinite product where v runs over all irreducible elements in A + . The Analysis on the Logarithm series Log Gn We fix a positive integer n and a Drinfeld A-module φ of rank 2 given by φ θ = θ + aτ + bτ 2 where a ∈ A and b ∈ A\{0} unless otherwise stated. We recall the definition of the t-module G n = (G 2n+1 a/K , φ n ) from (10) and denote its logarithm series Log Gn by In this section, we analyze the coefficients P i and using Papanikolas' method in [32, Sec Then the (2n)-th and (2n + 1)-st row of P i are given by R i,0 and R i,1 respectively. Note that N m = 0 when m ≥ n + 1 and N j−m = 0 if j − m ≥ n + 1. Therefore we see that where the last equality follows from setting l = j − m + 1. Since the last two rows of N contain only zeros, one can notice from the direct calculation that the multiplication N m P i−1 E (i−1) N l−1 has no contribution to the last two rows of P i if m ≥ 1. Thus we only consider the case when m = 0. Observe that where the only non-zero elements occur in the 2(l − 1) + 1-st and 2(l − 1) + 2-nd coordinates of the last two rows. We also mention that when l = n + 1, the non-zero terms appear only in the last coordinate of the last two rows which are actually the terms corresponding to 2(l − 1) + 1-st coordinate when n = l. Thus, applying (19) together with above observation finishes the proof. be the logarithm series of φ defined so that γ 0 = 1 and Recall that the logarithm series log C of the Carlitz module is defined by log We also recall the definition of shadowed partitions and elements F i for all i ≥ 0 from §1.3 and prove the following proposition. Using Lemma 3.1 and Proposition 3.2, we prove the following. Corollary 3.3. Let 1 ≤ k ≤ n. For any i ≥ 1, the last row of P i is given by , and the (2n)-th row of P i is given by Proof. We do induction on i. Note that if i = 1, then Lemma 3.1 shows that the last row and the (2n)-th row of P 1 are given by respectively. By using (20), we see that γ 1 = a/(θ − θ q ) = a/L 1 which implies that the induction hypothesis holds for i = 1. Assume that it holds for all i. We show that the hypothesis holds for i + 1. Observe that for any 1 ≤ k ≤ n + 1, using the functional equation (20), we have Similarly for 1 ≤ k ≤ n, we also obtain Thus, using Lemma 3.1, (21) and (22), we obtain that the induction hypothesis holds for the last row. For the (2n)-th row, by using the similar calculations above replacing γ i−1 with F i−1 and γ i with F i and applying Proposition 3.2 we also deduce that the latter statement of the corollary holds. By definition, for i ≥ 1, we see that F i is of the form where x j , n j , y j ∈ Z ≥0 for 1 ≤ j ≤ r. Recall that t is an independent variable over C ∞ . Then for each F i ∈ K of the form (23), we set, F i (t) := a q x 1 b y 1 (t − θ q n 11 ) . . . 
(t − θ q n 1k 1 ) + · · · + a q xr b yr (t − θ q n r1 ) . . . (t − θ q n rkr ) and observe thatF i (θ) = F i . Furthermore, we defineT i (t) in a similar way using the definition of T i in §1.3 so thatT i (θ) = T i and for each i ∈ Z ≥0 , set Υ i (t) := aF i (t) +T i (t). It is now easy to notice that Υ i (θ) = γ i . Let L 0 (t) := 1 and for i ≥ 1, we define the deformation L i (t) of elements L i by From the definitions, we have L i (θ) = L i . Finally, define P 0 (t) := Id 2n+1 and for all i ≥ 1 and 1 ≤ k ≤ n, we set The next proposition will be useful to deduce some facts about the domain of convergence of Log Gn . Proposition 3.4. For any i ≥ 0, we have that P i (θ) = P i . Proof. Assume first that i ≥ 1. Using (4), we can obtaiñ Note also that we have Thus, combining (24) and (25) we obtaiñ On the other hand, again by using (4), we also have By the functional equation (20), we see that Similarly, we also have By Proposition 3.2, similar calculation as in (28) and (29) also gives (30) and (31) b Moreover, by definition, we have Finally, evaluating both sides of (26) and (27) at t = θ together with using (28), (29), (30), (31) and (32), we see that which implies that the matrix P i (θ) satisfies the same functional equation (8) as P i does. Since such a solution is unique, we conclude that P i (θ) = P i for i ≥ 1. Note that when i = 0, the proposition follows from the definition of P 0 (t). Thus we finish the proof. We are now ready to give the main result of this section. Theorem 3.5. Let φ be the Drinfeld A-module of rank 2 defined as in (9) such that a ∈ F q and b ∈ F × q . Let G n be the t-module constructed from φ and C ⊗n as in (10). Then the logarithm function Log Gn of G n converges on the polydisc D n : Proof. For i ≥ 1, we first analyze the last two rows of P i by using Corollary 3.3. Since a, b ∈ F q , by [18,Lem. 4.1], we see that |γ i | ∞ < 1. For any 1 ≤ k ≤ n + 1 we have = q q i (−k+n+1−nq/(q−1))+nq/(q−1) . (34) Similar estimation can be also made for the elements [i] n+1−k F i /L n i and [i] n−k b q i−1 F i−1 /L n i by using (33) and (34) respectively. Thus by Corollary 3.3, we see that the norm of any element in one of the odd (resp. even) entries in (2n)-th or the last row of P i is bounded by the right hand side of (33) (resp. (34)). By Proposition 3.4 and the definition of L i (t), we see that for 0 ≤ l, m ≤ n, we have and for 1 ≤ j ≤ n, we obtain Moreover again by Proposition 3.4, for 0 ≤ s ≤ n − 1 and 0 ≤ r ≤ n, we have and for 1 ≤ j ≤ n, we obtain Thus, a small calculation implies that the norm of the elements in an odd entry (resp. even) of each row of P i is smaller than the bound obtained in the right hand side of (33) (resp. (34)). Now let x be an element in D. Then the bound on the norm of P i implies that q q i (−k+n+1−nq/(q−1))+nq/(q−1) → 0 as i → ∞. Since x ∈ D n is arbitrary, the function Log Gn converges on D n . Remark 3.6. Let G n = (G 2n+1 a/K , φ n ) be the abelian t-module as in the statement of Theorem 3.5. For a fixed choice of (q − 1)-st root of −b −1 , set γ := (−b −1 ) 1/(q−1) and letG n = (G 2n+1 a/K ,φ n ) be the t-module given byφ n (θ) = γ −1 φ n (θ)γ. It is easy to check by using the functional equation (8) that LogG n = γ −1 Log Gn γ. In other words, ifP i is the i-th coefficient of LogG n , then we haveP i = (−1) i b −i P i for all i ≥ 0. Therefore one can also obtain similar results for the logarithm series coefficients ofG n and hence sees that the function LogG n also converges on D n . 
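Several displays in this section (the elements L_i, the Carlitz logarithm, and the deformations L_i(t)) are garbled above. The following is a reconstruction consistent with the relations actually used, namely L_i(θ) = L_i and the norm estimates in the proof of Theorem 3.5:
\[
L_0 := 1, \qquad L_i := (\theta - \theta^{q})(\theta - \theta^{q^2})\cdots(\theta - \theta^{q^{i}}), \qquad
\log_C = \sum_{i \geq 0} \frac{\tau^{i}}{L_i},
\]
\[
\mathcal{L}_0(t) := 1, \qquad
\mathcal{L}_i(t) := (t - \theta^{q})(t - \theta^{q^2})\cdots(t - \theta^{q^{i}}), \qquad
\mathcal{L}_i(\theta) = L_i,
\]
and, since |θ − θ^{q^j}|_∞ = q^{q^j},
\[
|L_i|_\infty = q^{\,q + q^2 + \cdots + q^{i}} = q^{\,(q^{i+1}-q)/(q-1)},
\]
which is the growth rate underlying the exponents appearing in the bounds (33) and (34).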
Class and Unit modules We fix an abelian t-module G n which is constructed as the tensor product of the n-th tensor power of the Carlitz module and a Drinfeld A-module φ of rank 2 given by φ θ = θ + aτ + bτ 2 such that a ∈ F q and b ∈ F × q . In this section, our aim is to prove some properties of the class and unit module of G n . For more general description of class and unit modules, we refer the reader to [6], [19] and [38]. For 1 ≤ i ≤ 2n + 1, let e i ∈ Mat (2n+1)×1 (F q ) be such that the i-th coordinate of e i is 1 and the rest is equal to 0. Proof. Suppose that there exist elements a 1 , . . . , a 2n+1 in A such that By Lemma 2.5, the equality in (35) is equivalent to Thus we obtain that a i = 0 for 1 ≤ i ≤ 2n + 1 recursively. This implies that the set {e 1 , . . . , e 2n+1 } is A-linearly independent. Using the first equality in (36), one can show that the same set also spans the A-module Lie(G n )(A) and we leave the details to the reader. Let v ∞ (·) be the valuation corresponding to the norm |·| ∞ normalized so that v ∞ (θ) = −1. Consider the F q -module We have the following decomposition of F q -modules: Recall that Log Gn = i≥0 P i τ i is the logarithm series of G n . Proof. By Theorem 3.5, we see that e i is in the domain of convergence of Log Gn for 1 ≤ i ≤ 2n + 1. Assume to the contrary that there exist a 1 , . . . a 2n+1 ∈ A not all zero satisfying and let T := max 1≤i≤2n+1 {deg θ (a i )}. By using Proposition 3.4 and a simple calculation on the valuation of the coefficients of Log Gn , we see that for any k ≥ 1 and j ∈ {1, . . . , 2n + 1}, P k e j ∈ m. Since all entries of P k for k ≥ 1 has valuation bigger than 0, if we set g i := ∞ k=1 P k e i , then we see that g i ∈ m. Now dividing both sides of (38) by θ T and using the fact that P 0 = Id 2n+1 , Lemma 2.5 yields where a iT 's are the θ T -th coefficient of a i 's some of which may possibly be zero and a ′ i = a i /θ T − a iT . We also note that v ∞ (a ′ i ) ≥ 1 for 1 ≤ i ≤ 2n + 1. Thus by comparing both sides of (39), we see that 2n+1 i=1 a iT e i + g * = [0, . . . , 0] tr for some g * ∈ m. By the decomposition of Lie(G n )(K ∞ ) in (37) But it is only possible if a iT = 0 for all i. Using the similar argument for the remaining coefficients of a i , inductively, we can show that a i = 0 for all i. But this is a contradiction with the assumption on a i 's. Thus the set {λ 1 , . . . , λ 2n+1 } is A-linearly independent. Let G be an abelian t-module and consider the exponential function Exp G : Lie(G)(C ∞ ) → G(C ∞ ) of G. The class module H(G/A) of G is the A-module given by the quotient . We prove the following proposition. Proof. By Theorem 3.5, we have that the set m is in the domain of convergence of Log Gn . Since Log Gn is the formal inverse of Exp Gn , the image of Exp Gn contains m. Thus by (37) we have that Definition 4.4. Let V be a finite dimensional K ∞ -vector space. We say that an A-module M ⊂ V is an A-lattice in V if it is free and finitely generated over A such that the map M ⊗ A K ∞ → V is an isomorphism. (i) An invertible A-lattice in K ∞ is a tuple (J, α) consisting of a finitely generated and locally free of rank one A-module J and an isomorphism α : J ⊗ A K ∞ → K ∞ of K ∞ -modules. (ii) Let Id K∞ be the identity map on K ∞ . 
We say (J 1 , α 1 ) and (J 2 , α 2 ) are equivalent whenever there exists an isomorphism g : J 1 → J 2 of A-modules satisfying One can obtain that the relation between invertible A-lattices stated in Definition 4.6(ii) is an equivalence relation and we denote the set of equivalence classes of invertible A-lattices in K ∞ by Pic(A, K ∞ ). Given two finitely generated locally free A-modulesJ 1 andJ 2 , we can construct an invertible A-lattice J as follows: Let d 1 and d 2 be the rank ofJ 1 andJ 2 over A and α :J 1 ⊗ A K ∞ →J 2 ⊗ A K ∞ be an isomorphism of K ∞ -modules. For i = 1, 2, we define det A (J i ) to be the d i -th exterior power ∧ d iJ i ofJ i . Since A is a Dedekind domain, we see that Hom A (det A (J 1 ), det A (J 2 )) is a finitely generated and locally free A-module of rank one. Moreover the tuple J := (Hom A (det A (J 1 ), det A (J 2 )),α) is an element of Pic(A, K ∞ ) wherẽ We call an element g = j≤j 0 c j θ j ∈ K × ∞ monic if the leading coefficient c j 0 ∈ F × q is equal to 1. The monic generator of the A-module Hom A (det A (J 1 ), det A (J 2 )) is denoted by [J 1 :J 2 ] A . Note that Proposition 4.7 actually implies that v(Hom A (det We continue with an observation due to Anglès and Tavares Ribeiro [6, Sec. 2]. Let V ′ 1 and Using the homomorphism v : Pic(A, K ∞ ) → Q, Debry also proved the following. Definition 4.9. Assume that G is an abelian t-module. We define the unit module U(G/A) corresponding to G by By [19, Thm. 1.10], we know that Lie(G)(A) and U(G/A) are A-lattices in Lie(G)(K ∞ ). Since A is a Dedekind domain, Lie(G)(A) and U(G/A) are also locally free A-modules. The following proposition is useful to determine the generators of the unit module U(G n /A). Proof. We first note that the inclusion of Λ in Lie(G)(K ∞ ) induces an isomorphism Note also that the tuple (Hom A (det A (U(G/A)), det A (Λ)),α) is an element in Pic(A, K ∞ ) whereα : Hom A (det Calculating the valuation of both sides of (41), we see that We set to be the A-module generated by λ i 's, where λ i is as in Proposition 4.2 whose A-module structure is induced by ∂ φn . By Proposition 4.2, we observe that Λ is an A-submodule of U(G n /A) which is free of rank 2n + 1 and therefore has the same rank as Lie(G n )(A) by Lemma 4.1. It is also locally free as it is a torsion free module over the Dedekind domain A. Moreover one can note that the element [Lie(G n )(A) : Λ] A ∈ K × ∞ can be given by the determinant of the matrix Π ∈ Mat 2n+1 (K ∞ ) whose i-th column having the coordinates of λ i = Log Gn (e i ). Since the idea of the proof of Theorem 3.5 implies that the entries of P i for i ≥ 1 has valuation bigger than 0, we obtain [Lie (i) We have where λ i = Log Gn (e i ) for all 1 ≤ i ≤ 2n + 1. (ii) LetG n = (G 2n+1 a/K ,φ n ) be the t-module defined as in Remark 3.6. ThenG n is an abelian t-module. Furthermore we have Proof. The first part follows from the previous discussion. For the second part, we first show thatG n is an abelian t-module. Let M ′ be the Taelman t-motive defined as where M 0 is the effective t-motive K[t]m with K[t]-basis {m} and whose τ -action is given by τ · fm = −bf (1) is indeed an effective t-motive. By a similar calculation as in §2.3, we see that M ′ is free of rank 2n + 1 over K[τ ] and the t-module corresponding to M ′ is given byG n = (G 2n+1 a/K ,φ n ) which implies thatG n is an abelian t-module. By Remark 3.6, we know that the logarithm coefficients ofG n are F × q -multiple of the logarithm coefficients of G n . 
Therefore one can obtain by using the idea of the proof of Proposition 4.2 that the set {λ 1 , . . . ,λ 2n+1 } is Alinearly independent in Lie(G n )(K ∞ ). On the other hand, sinceφ n (θ) = γ −1 φ n (θ)γ where γ = (−b −1 ) 1/(q−1) , by using the same idea in the proof of Lemma 4.1 that Lie(G n )(A) is free of rank 2n + 1 generated by e i for i = 1, . . . , 2n + 1. Now the second part follows by using Proposition 4.10 and obtaining the formula in a similar way used to deduce (42). The proof of the main result Throughout this section, we let the t-module G n = (G 2n+1 a/K , φ n ) be constructed from φ given by φ θ = θ + aτ + bτ 2 such that a ∈ F q and b ∈ F × q and C ⊗n as in (10). 5.1. The dual t-motive of G n . Before giving the definition of the dual t-motive of G n , we require further setup. Assume that L is a perfect field and is an extension of K in C ∞ . We define the non-commutative polynomial ring L[σ] with the condition For k, m ∈ Z ≥1 , let Mat k×m (L)[τ ] be the set of polynomials of τ with coefficients in Mat k×m (L). For any g = g 0 + g 1 τ + · · · + g l τ l ∈ Mat k×m (L)[τ ], we further define g * ∈ Mat m×k (L)[σ] by g * := g tr 0 + (g In his unpublished notes, Anderson proved that the category H of Anderson dual t-motives over L is equivalent to the category G of t-modules defined over L (see [26,Sec. 2.5] and [9,Sec. 4.4] for more details). In the above definition, we see that one can correspond a t-module to a dual t-motive. We now describe how we can relate a dual t-motive to a t-module. It is easy to observe that the kernel of δ 0 (resp. δ 1 ) is equal to σL[σ] (resp. (σ − 1)L[σ]). By a slight abuse of notation, we further denote the maps δ 0 , δ 1 : Mat 1×d (L)[σ] → Mat d×1 (L) given by δ 0 ((f 1 , . . . , f d )) = (δ 0 (f 1 ), . . . , δ 0 (f d )) tr and δ 1 ((f 1 , . . Our aim is to describe the dual t-motive corresponding to the t-module G n = (G a/L , φ n ). Let H n be the left L[t, σ]-module given by By the definition of the σ-action on H n , we easily see that H n is the dual t-motive with the We consider an element Thus we obtain tf = (θa n + a n−1 )(t − θ) n h 1 + (θb n−1 + b n−2 )(t − θ) n−1 h 2 + (θa n−1 + a n−2 )(t − θ) n−1 h 1 + · · · + (θb 1 + b 0 )(t − θ)h 2 + (θa 1 + a 0 )(t − θ)h 1 + (θb 0 + a q n )h 2 + (θa 0 + aa q n + bb q n−1 )h 1 + (σ − 1) · (a q n h 2 + a q n ah 1 + b q n−1 bh 1 ). Therefore the F q -linear ring homomorphism η ′ : F q [t] → Mat 2n+1 (L)[τ ] described above is given by η ′ (t) = φ n (θ) which proves that G n is the t-module corresponding to H n under the equivalence between the categories H and G . For all i ∈ {1, . . . , 2n + 1}, we recall the definition of e i and then set e ∨ i := e tr i ∈ Mat 1×(2n+1) (F q ). Let H Gn be the dual t-motive of G n . We have that H n ∼ = H Gn as left L[t, σ]-modules. We now define a left L[t, σ]-module isomorphism ι : H n → H Gn by ι(h 1 ) = e ∨ 2n+1 and ι(h 2 ) = e ∨ 2n . Using the L[t]-module action on H n defined as in (45) and the L[t]-module action on H Gn defined as in (43), we obtain ι((t − θ) n−j h 1 ) = e ∨ 2j+1 for 0 ≤ j ≤ n and ι((t − θ) n−1−j h 2 ) = e ∨ 2(j+1) for 0 ≤ j ≤ n − 1. Next proposition is crucial to deduce our main result. for some Y (t) ∈ K[t]. From (49), one can calculate y s for 0 ≤ s ≤ n recursively using the equations Since equations in (50) to determine y s are also used to determine the coefficient of (t − θ) s in the Taylor expansion of (t − θ q ) −(n+1) at t = θ, we see that for j = 0, 1. 
Using (45) and our assumption on the elements a and b, we also observe that Thus by Proposition 2.1, Proposition 5.2 and (51), we have On the other hand, by (45), we also have (t − θ) n+1 h 1 = σh 2 + aσh 1 = σ(h 2 + ah 1 ). Thus similar to the calculation of ϕ 1 (h 2 ), we now obtain Theorem 5.3. For any positive integer n satisfying 2n + 1 ≤ q, we have . Proof. We first prove the inclusion ⊇. To do this we need to find c i,j ∈ K ∞ for i, j ∈ {1, 2} so that Note that by the properties of hyperderivatives (see [32,Lem. 2.3.23]) and Proposition 2.1 we have Thus, (52) implies that Therefore, using (4), we see that for all 1 ≤ m ≤ n. We now choose c 1,2 = b −1 (θ − θ q ) n and c 1,1 = 0. Thus by (53), one can obtain Similarly, if we choose c 2,2 = −ab −1 (θ − θ q ) n and c 2,1 = (θ − θ q ) n+1 , we see that Thus, we have W ⊇ K ∞ · e 2n ⊕ K ∞ · e 2n+1 . On the other hand, note that since the matrices ∂ φn (c i,j ) = d n [c i,j ] has non-zero determinant when c i,j = 0, by using (54), we obtain ϕ 1 (h 2 ) = d n [c 1,2 ] −1 e 2n+1 . Thus, by multiplying both sides of (55) with d n [c 2,1 ] −1 , we can obtain ϕ 1 (h 1 ) in terms of a linear combination of e 2n and e 2n+1 . Thus we have the desired inclusion W ⊆ K ∞ ·e 2n ⊕K ∞ ·e 2n+1 . It also implies that ϕ 1 (h 1 ) and ϕ 1 (h 2 ) are K ∞ -linearly independent. The isomorphism of the K ∞ -vector spaces in the statement of the theorem follows from the definition of G n and the details are left to the reader. We recall the t-moduleG n = (G 2n+1 a/K ,φ n ) defined as in Remark 3.6. To prove the main result of the paper, we need some further analysis onG n and its dual t-motive. σ ·h 2 = −b(t − θ) n+1h 1 + a(t − θ) nh 2 . By a straightforward modification of the calculations in the present section, one can obtain thatH n is the dual t-motive corresponding toG n . Furthermore, we have ϕ 1 (h 1 ) = −b −1 ϕ 1 (h 1 ) and ϕ 1 (h 2 ) = −b −1 ϕ 1 (h 2 ). Since b ∈ F × q , by Theorem 5.3, one can obtaiñ (i) LetW be the K ∞ -vector space as in (56) and let LogG n = i≥0P i τ i be the logarithm series of the t-moduleG n . Then for any natural number n satisfying 2n + 1 ≤ q, the K ∞ -vector space (via the action of ∂φ n ) generated byP i τ i (x) for all i ≥ 1 and any x ∈ Lie(G n )(C ∞ ) is contained inW . We continue with letting φ to be the Drinfeld A-module of rank 2 given by φ θ = θ+aτ +bτ 2 such that a ∈ F q and b ∈ F × q and recall that M φ is the effective t-motive corresponding to φ introduced as in Example 2.7(i). We have by [38,Rem. 5] (see also [12,Sec. 2]) that (57) L(M ∨ φ , 0) = L(φ/A). Using Goss' results [24, Sec. 5.6] on abelian t-modules and applying the theory of Hom shtukas developed in [31,Sec. 8,12] on a suitable shtuka model ofG n , one can obtain that L(M ∨ Gn , 0) = L(G n /A). We also refer the reader to [4] for another approach using the dual t-motiveH n ofG n as well as [5,Sec. 4.1] and the references therein for more details. Remark 5.6. We briefly explain how one can obtain the value of L(M φ , n) at n = 1. Letφ be the Drinfeld A-module given in Example 2.11(ii). Consider the F q -vector space By [18,Cor. 4.2], we know that logφ converges on m ′ . Thus, in a similar way to the proof of Proposition 4.3, we get H(φ/A) = {0}. Moreover, again by using [18,Cor. 4.2], we see that logφ converges at 1. This implies that logφ(1) ∈ U(φ/A). Since U(φ/A) is an A-lattice in K ∞ , using the minimality of the norm of logφ(1) among the elements of U(φ/A), we see that U(φ/A) = logφ(1)A. Thus by [38,Thm. 1], we have L(φ/A) = logφ(1). 
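The appeal to [38, Thm. 1] in Remark 5.6 is to a class number formula; in the notation of §4 it takes, up to the precise hypotheses of [38], the shape
\[
L(G/A) \;=\; [\operatorname{Lie}(G)(A) : U(G/A)]_A \cdot [H(G/A)]_A,
\]
where [·]_A denotes the monic generator fixed in §4. This is a hedged restatement for the reader's orientation rather than a quotation of [38]. In the situation of Remark 5.6 it specializes immediately: since H(φ/A) = {0} and U(φ/A) = log_φ(1)A for the Drinfeld module of Example 2.11(ii), both factors collapse and L(φ/A) = log_φ(1), as claimed.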
By using Example 2.11(ii), (18) and (57) We finish this subsection with the following proposition. 5.4. Proof of Theorem 5.9. Using Theorem 5.4, we prove the next proposition. Proof. By Theorem 4.11(ii), we obtain U(G n /A) ∩W ⊂ ⊕ 2n+1 i=1 A · LogG n (e i ). By Theorem 5.4(i), we see thatP i e 2n ∈W for i ≥ 1. Since the vectors e 2n and e 2n+1 are inW andW is a finite dimensional normed vector space, the sum e 2n+j + ∞ i=1P i e 2n+j = LogG n (e 2n+j ) is also inW for j = 0, 1. By Theorem 5.4(ii), we know that U(G n /A) ∩W is an Alattice inW and therefore is a free A-module of rank two by Remark 4.5. Thus we obtain U(G n /A) ∩W = A · LogG n (e 2n ) ⊕ A · LogG n (e 2n+1 ) as desired. The latter equality in the statement of the proposition can be obtained similarly. Now we are ready to state our main result. Theorem 5.9. Let φ be the Drinfeld A-module defined as in (9) such that a ∈ F q and b ∈ F × q . Then for any positive integer n such that 2n + 1 ≤ q, we have where F i is the sum of the components of γ i corresponding to shadowed partitions in P 1 2 (i) for all i ≥ 0. Proof. For 1 ≤ i ≤ 2n + 1, letλ i = LogG n (e i ) be as in the statement of Theorem 4.11(ii). By (40) where the last equality follows from combining Corollary 3.3 with the fact thatP i = (−1) i b −i P i for i ≥ 0 and P i is the i-th coefficient of the logarithm series of G n = (G 2n+1 a/K , φ n ). Note that by Proposition 5.7, we have L(G n /A) = L(M φ , n + 1). Thus the result follows from (62).
Patients’ age and discussion with doctors about lung cancer screening: Diminished returns of Blacks Abstract Objective As age is one of the main risk factors for lung cancer, older adults are expected to receive more messages regarding lung cancer screening (LCS). It is, however, unclear whether age similarly increases patients’ chance of discussing LCS across various racial groups. We aimed to determine racial differences in the effect of patients’ age on patient‐physician discussion about LCS. Methods This cross‐sectional study borrowed data from the Health Information National Trends Survey 5 (HINTS 2017), which included 2277 adults. Patients’ demographic factors, socioeconomic characteristics, smoking status, possible LCS indication, and patient‐physician discussion about LCS were measured. We ran logistic regression models for data analysis. Results Independent of possible LCS indication, older patients were more likely to have a patient‐physician discussion about LCS. However, there was a significant interaction between race and age, suggesting a larger effect of age on the likelihood of discussing LCS with doctors for Whites than Blacks. In race‐stratified models that controlled for possible LCS indication, higher age increased lung cancer discussion for Whites but not for Blacks. Conclusion Whether age increases the chance of discussing LCS or not depends on the patient's race, with Blacks receiving fewer messages regarding LCS as a result of their aging. tomography screening. 7 Following this large clinical trial, multiple cancer-related organizations, including the American College of Chest Physicians, the US Preventive Services Task Force, and the American College of Radiology, issued their recommendations for lung cancer screening (LCS) of high-risk individuals using low-dose computed tomography imaging. 8,9 Finally, in February 2015, the Center for Medicare and Medicaid Services (CMS) approved coverage for LCS of high-risk beneficiaries using low-dose computed tomography. 10 Considering the importance of age and smoking as two major risk factors of lung cancer, CMS defined eligible high-risk beneficiaries as individuals aged 55-77 years who have a smoking history of at least 30 pack-years and currently smoke or have quit within the past 15 years. 10 Although age is supposed to increase the likelihood of a patientdoctor conversation about LCS, several other factors may prevent these discussions from occurring. From the patient side, older age is also associated with risk of poverty, 11 abuse, neglect, 12 cognitive decline, 13 memory loss, 13 social isolation, 14 and transportation difficulties, 15 all of which may reduce the chance for receiving an LCS discussion. On top of these factors, research has shown that despite higher risk of lung cancer, older individuals may discount such risk. In a recent study on a nationally representative sample of US adults, regardless of lung cancer risk, older age (despite increasing the actual risk of cancer) was associated with less cancer perceived risk and worries. 16 Given that race influences quality of health care, the effect of patients' age on the opportunity to discuss LCS with physicians may be different for racial and ethnic groups. 17 Race is a major determinant of cancer mortality. 18 Despite their higher risk and mortality of lung cancer, 19 Black individuals are less likely than White individuals to perceive high levels of cancer risk. 
16,20 In data collected from National Lung Screening Trial participants, Whites had higher cancer perceived risk than Blacks. 21 In addition, Blacks are less likely to qualify for LCS, despite coverage provided through the Affordable Care Act. 22 Some studies have documented racial disparities in LCS participation, with Blacks having a lower chance to receive LCS than Whites. 23 All these factors, in addition to high rate of poverty, 24 low trust toward the health-care system, 25,26 and low quality of care that they receive, 27 contribute to a relative disadvantage of Blacks compared to Whites regarding lung cancer outcomes. 18 To better understand the reasons behind racial disparities in LCS, we compared Black and White patients for the association between age and having a discussion with a doctor about LCS. To generate generalizable results, we used data with a nationally representative sample of US adults. | Design and setting This was a cross-sectional study that used data from the Health Information National Trends Survey 5 (HINTS-5) Cycle 1, 2017. 28 The HINTS is a US nationally representative survey that has been periodically administered by the National Cancer Institute since 2003. The purpose of the HINTS is to provide data for researchers to better depict the national picture of cancer information among US adults. 29 The data collection period for the HINTS-5-Cycle 1 was January 2017 through May 2017. | Ethics The Westat's Institutional Review Board (IRB) approved the HINTS-5 study. Westat's Federal Wide Assurance (FWA) number is FWA00005551 and Westat's IRB number is 00000695. This project used to have an OMB number (0920-0589). The HINTS study was exempted from IRB review by the National Institutes of Health Office of Human Subjects. All HINTS-5 participants provided informed consent. Non-institutionalized US adults (aged ≥18 years) living in the United States are the HINTS target population. The HINTS-5, Cycle 1 used a two-step sampling design to make sure that the final sample was nationally representative. The first step was a stratified sample of residential addresses that were derived from all residential addresses received from the Marketing Systems Group. In the second step, one adult from each household was selected to participate in this study. The sampling frame was grouped into two strata based on concentration of minorities: Stratum # 1, areas with high minority concentration; and Stratum # 2, areas with low minority concentration. Addresses were drawn from each sampling stratum using equal-probability sampling. 29 | Survey information The surveys were mailed to the targeted participants. To encourage study participation, a monetary incentive was included in the mail. Two toll-free telephone numbers (for English calls and Spanish calls) were provided to respondents. The overall response rate in HINTS-5 was 32.4%. 29 | Study variables The study variables used for analysis included patient's age, race, gender, education attainment, smoking status, health insurance status, and LCS discussion with doctor. | Demographic factors Demographic factors included in analysis were race, ethnicity, age, and gender. Race was considered as a dichotomous variable (0 = White, 1 = Black). Gender was also a dichotomous variable (0 = female, 1 = male). Patients' age was considered a continuous measure (range: 18-101 years). | Smoking status Ever smoker status was measured using the following survey question: "Have you smoked at least 100 cigarettes in your entire life?" 
with yes or no as response options. Patients were also asked this question regarding their smoking habit: "How often do you now smoke cigarettes?" with response options of: 1 = Every day, 2 = Some days, and 3 = Not at all. Current smoker status was assessed by being an ever smoker and admitting to smoking every day or some days. | Health insurance Having the following types of insurance was considered as being | Possible LCS indication Age 55-77 years and ever smoking status were used to divide study participants into the high-and low-risk for lung cancer groups: The high-risk group included those aged 55-77 years who were ever smokers. The low-risk group included any other participants. This grouping was based on CMS recommendations for LCS of high-risk individuals. 30 Pack-year smoking history was not documented in the HINTS data set. Therefore, we could not adjust our study cohort based on pack-year smoking. | Having a discussion with doctors about LCS The following single item was used to measure having had a discussion with doctors about LCS: "At any time in the past year, have you talked with your doctor or other health professional about having a test to check for lung cancer?" Responses were yes, no, and do not know. | Statistical analysis We used Stata 15.0 (Stata Corp.) for data analyses. For univariate analysis, we reported means and frequencies, associated with their standard errors (SEs) and 95% confidence intervals (CIs). To test the association between age and having a discussion with doctors about LCS, we used logistic regression models, controlling for demographic factors, education, and health-care access (insurance). We ran four models overall. Model 1 only included the main effects. Model 2 also included a race-by-age interaction term. Model 3 was performed in Whites. Model 4 was tested in Blacks. Odds ratio (OR), SE, 95% CI, t, and P values were reported. P < 0.05 was considered significant. | Descriptive statistics Participants had a mean age of 49 years (SE = 0.34). From all participants, 52% were females. Thirteen percent of the sample was Black. Most participants (about 92%) had some type of health insurance. | Association between age and LCS discussion in the pooled sample Based on Model 1, in the pooled sample of 2277 individuals, independent of possible LCS indication, higher age was associated with a higher chance of having had a discussion with a doctor about LCS (OR, 1.05; 95% CI, 1.02-1.07). Another factor significantly associated with having had a discussion with a doctor about LCS was being a current smoker (OR, 1.93; 95% CI, 1.17-3.18). We also found a marginally significant association between male gender and having a discussion with a doctor about LCS (P = 0.053; Table 2). Based on Model 2, age showed a negative and significant interaction with race (OR, 0.95; 95% CI, 0.91-1.00), suggesting that age has a smaller association with chance of having had a discussion with doctors about LCS for Blacks compared to Whites (Table 2). | Association between age and LCS discussion according to patient's race As shown by Model 3, in Whites, independent of possible LCS indication, older age was associated with a higher chance of having had a discussion with a doctor about LCS (OR, 1.05; 95% CI, 1.03-1.08; | D ISCUSS I ON We found that a patient's age was positively associated with having had a discussion with a physician about LCS in the whole population. 
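Before turning to interpretation, the model specifications summarized above (Models 1 to 4) can be sketched with standard logistic regression. The sketch below is illustrative only: the variable names (discussed_lcs, age, black, etc.) are hypothetical stand-ins for the HINTS-5 fields described above, the input file is a placeholder, and the HINTS jackknife replicate weights needed to reproduce the published standard errors are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per respondent; hypothetical column names standing in
# for the HINTS-5 variables described in the Measures section.
# discussed_lcs : 1 = talked with a doctor about LCS in the past year
# age           : continuous, years
# black         : 1 = Black, 0 = White
# male, education, insured, current_smoker, lcs_indication : covariates
df = pd.read_csv("hints5_cycle1.csv")  # hypothetical file name

# Model 1: main effects only (pooled sample)
m1 = smf.logit(
    "discussed_lcs ~ age + black + male + education"
    " + insured + current_smoker + lcs_indication",
    data=df,
).fit()

# Model 2: adds the race-by-age interaction term
m2 = smf.logit(
    "discussed_lcs ~ age * black + male + education"
    " + insured + current_smoker + lcs_indication",
    data=df,
).fit()

# Models 3 and 4: race-stratified
m3 = smf.logit("discussed_lcs ~ age + male + education + insured"
               " + current_smoker + lcs_indication",
               data=df[df.black == 0]).fit()
m4 = smf.logit("discussed_lcs ~ age + male + education + insured"
               " + current_smoker + lcs_indication",
               data=df[df.black == 1]).fit()

# Odds ratios with 95% confidence intervals for the interaction model
or_table = np.exp(pd.concat([m2.params, m2.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

A negative, significant coefficient on the age:black term in Model 2 corresponds to the smaller effect of age on the odds of an LCS discussion among Black patients reported above.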
The effect of patient's age on patient-physician discussion about LCS, however, was present for White but not Black patients. There was also an interaction between race and patient's age. These findings suggest two main hypotheses: First, Black patients' age more strongly increases barriers to a high-quality conver- Blacks also contribute to the existing racial disparities in lung cancer outcomes. 24 The poverty rate increases with age and is higher in Blacks as compared to Whites. 33 If Black older adults more frequently struggle with poverty, ageing may be a strong barrier for them against the chance of having a patient-physician discussion about LCS. Racial difference in the level of trust in the health-care system is another factor that may cause a relative disadvantage for Black older individuals regarding patient-physician interaction. 26 These racial differences suggest that age may have a larger effect as a barrier against chance of LCS discussion for Blacks as compared to Whites. Biases of the health-care system, which are associated with worse health outcomes, may also differently impact Black and White older adults. 34,35 Physician bias becomes a more significant problem when the physician and patient are not from one race. 36 Blacks are less likely to be race concordant with their physicians compared to Whites. 37 It has been shown that lack of patientphysician racial concordance reduces quality of doctor-patient engagement for Black patients. 38 Black patients also report lower satisfaction from their health-care visits compared to White patients. 25 However, we included all individuals aged 55-77 years with history of smoking based on the CMS guideline. HINTS data have been used for assessment of high-risk for lung cancer individuals. 16 Considering that most smokers initiate smoking prior to age 26 and that mean age was 49 years in our cohort, it is extremely probable that most of the smokers we included in our analysis were long-term smokers. 58 Among the limitations of our study is the cross-sectional nature of our data. However, large sample size, and using a national representative sample, was among the strengths of this study. | CON CLUS ION We found that, unlike Whites, Blacks are not receiving more LCS messages from their physicians as they get older. This finding is in line with the minorities' diminished returns theory, suggesting that the effects of risk and protective factors are systematically smaller for the minority group. The finding is also alarming and may contrib-
Testing Zeolite and Palygorskite as a Potential Medium for Ammonium Recovery and Brewery Wastewater Treatment : Environmental pollution is an issue of particular concern, specifically when industrial waste products are not subjected to appropriate treatment. Among various industries in the agri-food sector, the brewing industry holds a significant position in this context, given that beer stands as the predominant choice of consumers. Brewery waste generates significant quantities of organic substances, along with ammonium nitrogen and phosphorus. Among the various methods for their treatment, adsorption has received substantial attention due to its cost-effectiveness and operational simplicity. The present study investigates the adsorption capacity of two materials, zeolite and palygorskite, for the removal of ammonium nitrogen and brewery waste, using columns and batches. Simultaneously, desorption and regeneration experiments were conducted, and the effect of pH on their effectiveness was also examined. To understand the adsorption mechanisms, isotherm and kinetic models have been estimated. The results of the experiments have demonstrated a marked adsorption efficiency of the adsorbent materials, surpassing 90%. In comparison, zeolite has exhibited a better adsorption capacity in the removal of ammonium nitrogen, while palygorskite has shown greater aptitude for phosphorus removal. The purpose of these experiments was to investigate the adsorption capacity of these two materials as a potential medium for brewery wastewater treatment (e.g., as part of adsorption filter, trickling filters, and constructed wetlands). Introduction One of the current concerns of humanity concerns environmental pollution, particularly in the context of industrial waste disposal into aquatic ecosystems without prior treatment [1,2].Many industries in the agro-food sector generate substantial quantities of waste and by-products, which exert a profound influence on environmental degradation, climate change, and economic disparities [3]. Among the various types of industrial waste, those emanating from the brewing industry hold notable economic significance.This significance is attributed to the global preeminence of beer consumption [1], given that it accounts for approximately 40% of total alcohol consumption [4].It is estimated that the production of 1 liter of beer results in the generation of 3-10 liters of wastewater, depending on the production process and the specific water usage [5][6][7], with global beer production calculated to be over 134 billion liters [8].The volume of beer produced in Greece amounted to four million hectoliters in 2021, an increase compared to the previous year [9]. 
The brewing industry generates large quantities of organic waste, mainly consisting of brewery wastewater (BWW) and spent grains or barley malt residues (BSG).BWW is characterized by a high organic load that is easily biodegradable, while BSG is a product of difficult biological decomposition due to its lignocellulosic compounds' content [3].Furthermore, these waste products exhibit high concentrations of nutrients, which are mainly In order to address these issues, environmental authorities are exerting pressure on breweries to ensure that their wastewater abides with environmental standards [8].Numerous wastewater treatment methods have been adopted for the treatment of brewery wastewater, including membrane filtration [13], anaerobic [3] and aerobic processes [14], constructed wetlands [8], and combinations of these [10].Among these methods, biological wastewater treatment has become a more attractive alternative solution, finding application in both developed and developing nations.Notably, within the spectrum of wastewater treatment options, constructed wetlands (CWs) have gained favor as an eco-friendly technology, while offering numerous benefits through the treatment of variable wastewater volumes.CW systems have been tested for treating different types of industrial wastewater [15], including those from brewery operations [8].In the CWs, wastewater purification is achieved through a comprehensive array of physical, chemical, and biological processes that occur within their components [16].However, the effectiveness of CWs in removing pollutants is predominantly subject to hydraulic variables [17,18].For instance, a lower hydraulic loading rate leads to higher pollutant contact time within the CW, favoring the settling of Total Suspended Solids (TSS) and the degradation of Chemical Oxygen Demand (COD), including the high absorption and retention of nutrients [19], improving the quality of the wastewater.It should be noted that the performance of CWs is contingent upon the local climatic conditions, the vegetation, and the specific substrates employed, all of which contribute to their efficiency. Regarding nutrient removal, the removal of nitrogen (N) is associated with processes such as ammonification, nitrification, plant uptake, and ammonia adsorption [20].Conversely, the removal of phosphorus (P) is accomplished through a synergistic approach involving substrates, vegetation, and microorganisms [21].The substrate plays a vital role in mitigating N and P. Commonly used substrates are generally categorized into three types-natural materials, industrial by-products, and industrial products.Various substrates have been utilized in constructed wetlands, including gravel, clay, bentonite, zeolite, charcoal, palygorskite, and others [22]. 
In the selection of substrates, the primary considerations are the cost and the availability of raw materials [23].Zeolite is a natural aluminosilicate mineral with various commercial applications due to its low cost, stable crystal structure, and low density.The primary physical characteristics of zeolites that are relevant for ion exchange appli-cations [24] include porosity (%), thermal stability, ion exchange capacity (mmolM + /g), specific weight (g/cm 3 ), and apparent density (g/cm 3 ) [25].Zeolite, a natural mineral, exhibits a high absorption rate for both nitrogen (N) and phosphorus (P) due to its internal composition and spatial structure [26].Specifically, zeolite filters can significantly enhance the removal capacity of CW, with removal rates for organic matter, N, and P reaching 95%, 80%, and 70%, respectively [27]. On the other hand, palygorskite is a hydrated alumino-magnesium phyllosilicate mineral with an ideal chemical formula (Mg, Al) 5 (Si, Al) 8 O 20 (OH) 2 8H 2 O [28].Its layer charge and high surface area give palygorskite intermediate cation exchange capacity while also providing high adsorption capacity.This, combined with the mineral's elongated quality, makes it particularly useful in various industrial applications (such as drilling fluids, suspension fertilizers, environmental absorbents, etc.) [29].Similarly, palygorskite is a significant industrial mineral that is closely associated with absorption processes [30].This mineral exhibits removal rates of 90% for N and 85% for P [31]. The use of ion exchange materials, such as zeolite and palygorskite, enhances the efficiency of CWs, including their capacity to handle shock loads and operate effectively over a wider range of temperatures [32].In general, however, the adsorption capacity of adsorbent substances depends on multiple characteristics, such as their texture properties, including porosity, surface area, pore size, and surface chemistry, all of which play a critical role in adsorption performance.Furthermore, the adsorption capacity is subject to modulation by an array of factors, including the pH of the solution, ionic concentration, temperature, initial concentration of adsorbates, as well as the duration of contact [2]. In this study, the adsorption capacity of zeolite and palygorskite for treatment of brewery wastewater was investigated, as substrates media of a constructed wetland system.Specifically, research was conducted using columns and batch tests to remove ammonia ions from artificial wastewater.An equivalent set of experiments was conducted for wastewater from the Macedonian Thrace Brewery, known as "Vergina".In tandem with these experiments, desorption and regeneration tests were carried out, and the impact of altering conditions, such as pH, was studied.Furthermore, in order to better understand the adsorption mechanisms, calculations of adsorption isotherm and kinetic models were conducted, including the Elovich model, the intraparticle diffusion (ID) model, the double constant (DC) model, the pseudo-first (PFO) model, and the pseudo-second (PSO) model. 
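The kinetic models just listed have widely used closed forms (the exact parameterizations applied in this study are given in Appendix A, which is not reproduced here). As an illustration only, the sketch below fits the pseudo-first-order and pseudo-second-order forms to batch data with SciPy; the time and uptake arrays are placeholders, not measured values from this study.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder batch data: contact time (min) and amount adsorbed q_t (mg/g)
t = np.array([5, 15, 30, 60, 120, 180, 300, 400], dtype=float)
q_t = np.array([0.40, 0.62, 0.78, 0.91, 0.95, 0.97, 0.98, 0.99])

# Standard textbook parameterizations of the two kinetic models:
def pfo(t, qe, k1):   # pseudo-first order: q_t = qe * (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):   # pseudo-second order: q_t = k2 qe^2 t / (1 + k2 qe t)
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

for name, model in [("PFO", pfo), ("PSO", pso)]:
    popt, _ = curve_fit(model, t, q_t, p0=[1.0, 0.01], maxfev=10000)
    resid = q_t - model(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((q_t - q_t.mean())**2)
    print(f"{name}: qe = {popt[0]:.3f} mg/g, k = {popt[1]:.4f}, R^2 = {r2:.4f}")

In practice, the model with the higher coefficient of determination and a physically plausible q_e is retained; for ammonium uptake on zeolites, the pseudo-second-order form is frequently reported in the literature as the better fit.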
Materials and Methods In order to investigate the ability of zeolite and palygorskite to adsorb ammonium and treat brewery wastewater, a series of experiments was conducted.As a first step, batch experiments using ammonium aqueous solutions were performed in order to define adsorption kinetics and isotherms and to examine the effect on adsorption of ammonium feeding concentration, material size and pH.The next step was to conduct batch column experiments, based on the batch experiments results, using firstly ammonium aqueous solutions and then brewery wastewater.During batch column experiments, adsorption kinetics and isotherms were defined for ammonium, ortho-phosphate and COD.All batch and column experiments were followed by desorption experiments, to investigate the ability of the materials to release the adsorbed ammonium, phosphorus and organic matter. Artificial Wastewater and Brewery Wastewater Ammonium chloride (NH 4 Cl) was used in the experiments in order to produce artificial wastewater (AWW).After NH 4 Cl was dried at 104 • C, it was diluted in deionized water leading to several concentrations of NH 4 + -N, so as to conduct the adsorption and desorption studies in batch and column experiments.In addition, apart from the experiments with artificial wastewater, column experiments were also conducted with brewery wastewater (BWW).For the needs of this experiment, 10 L of brewery wastewater were obtained after the fermentation stage from a Macedonia Trace Brewery called "Vergina" with a capacity of approximately 7000 t/year, which is located in Rodopi.Then, the wastewater was transported to the laboratory in plastic bottles and was used immediately to prevent any change in its composition.The brewery's wastewater characteristics are presented in Table 2. Adsorbents: Zeolite and Palygorskite Characterization The natural zeolite employed in this study originated from Bulgaria (Imerys Minerals Bulgaria AD, Kardzhali, Bulgaria) and subsequently imported into Greece by ZeolifeTM Company, Tessaloniki, Greece.Specifically, this material can be described as a hydrated aluminosilicate mineral of volcanic origin, with a clinoptilolite content of approximately 85%.It contains a moisture content ranging from 10 to 12%, while the residual 3 to 5% is comprised of impurities, including feldspar, micas, and clays that are devoid of fibers and quartz.The chemical composition of the dry clinoptilolite is detailed in Table 3.According to the technical data sheet provided, this particular zeolite exhibits a minimum cation exchange capacity of 150 meq/100 g.In preparation for all experimental procedures, the zeolite was sieved to obtain different granulometries (i.e., 1-2, 2-4 and >4 mm), washed to remove dust, and finally dried.Fibrous palygorskite of natural origin, extracted from the Ventzia Basin (Grevena, Greece), was provided by Geohellas S.A. (Athens, Greece) in granular form, albeit with non-uniform sizing.Geohellas S.A. conducted a chemical composition analysis of the palygorskite, and the results are presented in Table 3.The palygorskite possessed a cation exchange capacity (CEC) of 30 meq/100 g [33].The palygorskite was sieved to obtain different granulometries (i.e., 1-2, 2-4 and >4 mm), washed to remove dust, and finally dried at 105 • C for 24 h before use. 
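As a concrete illustration of the AWW preparation described above: the mass of dried NH4Cl needed for a target NH4+-N concentration follows from the molar masses of nitrogen (14.01 g/mol) and ammonium chloride (53.49 g/mol). The short sketch below computes the dosing for the batch-test concentrations; it is an illustrative calculation, not a protocol taken from the study.

# Mass of NH4Cl required to reach a target NH4+-N concentration in water.
M_N = 14.01       # g/mol, nitrogen
M_NH4CL = 53.49   # g/mol, ammonium chloride

def nh4cl_mass_mg(target_mg_n_per_l: float, volume_l: float) -> float:
    """Milligrams of dried NH4Cl to dissolve in `volume_l` liters of
    deionized water to reach `target_mg_n_per_l` mg NH4+-N per liter."""
    return target_mg_n_per_l * volume_l * (M_NH4CL / M_N)

for c in (5, 10, 50, 100, 200):  # batch-test concentrations, mg NH4+-N/L
    print(f"{c:>3} mg N/L in 1 L -> {nh4cl_mass_mg(c, 1.0):.1f} mg NH4Cl")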
Batch Studies
For both adsorbents (zeolite and palygorskite), batch tests were carried out in 250 mL borosilicate bottles filled with 100 mL of AWW at several concentrations (5, 10, 50, 100 and 200 ppm) and 5 g of adsorbent of different particle sizes (i.e., 1-2, 2-4, >4 mm). Bottles were continuously stirred at 150 rpm and samples were taken at regular intervals until equilibrium was achieved. The volume sampled at each time interval during desorption was minimal (~1.5 mL) in order to avoid any change in the solid/liquid ratio. The maximum amount of ammonium adsorbed per mass of adsorbent at equilibrium (q_e) was determined. In addition, after evaluating the impact of granulometry on the NH4+-N removal, adsorbents were used as purchased without any sieving. Again, the batch test procedure was the same, but only one concentration of NH4+-N was applied (50 ppm). In addition, the influence of pH was examined for both adsorbents at 5 ppm NH4+-N initial concentration.

Column Studies
Based upon the batch experiments, further adsorption experiments were also carried out in laboratory-scale columns. The adsorption experiments on fixed-bed reactors were conducted in Plexiglas tubes (columns), 50.0 cm high and of 4.0 cm internal diameter with approximately 450 mL operational volume, equipped with sample valves placed at the bottom of the columns (Figure 1). The columns were first filled with approximately 400-450 g of zeolite and fed with 400 mL of AWW at concentrations of 200 and 5000 mg NH4+-N/L. Then the columns operated with palygorskite (~240-340 g) and were filled with AWW (~400 mL) at various NH4+-N concentrations (200, 1000, 1500, 3000 and 5000 mg/L). Samples were collected at regular intervals from the lower sample valve. Generally, in order to achieve complete adsorbent saturation, the columns were fed with an aqueous solution of ammonium chloride of concentration 5000 mg/L. Finally, column studies were conducted with brewery wastewater (BWW), testing both adsorbents (zeolite and palygorskite) and following the same procedure as in the experiments with AWW. Specifically, one column was filled with approximately 400 g of zeolite and another with approximately 200 g of palygorskite.
Adsorption Kinetics Adsorption experiments were conducted in order to determine the time and to which extent NH 4 + -N, S COD t and OP are adsorbed on the substrate material (equilibrium), and from which point saturation of binding sites occurs.The experimental data collected from the adsorption tests were fitted to several kinetic models, such as the pseudo-first order, pseudo-second order, Elovich, intraparticle diffusion, and the double-constant model.Details regarding the applied adsorption kinetic models are included in the Appendix A (Appendix A.1). Isothermal Adsorption Adsorption isotherms are used to describe the interactive behavior between solutes and adsorbents, which express the surface properties and affinity of the adsorbent.The experimental data were fitted to four isotherm models: Langmuir, Freundlich, Temkin and Henry models, in order to evaluate which better describes them.Information regarding the isotherm models is shown in the Appendix A (Appendix A.2). Desorption and Regeneration Experiments Zeolite and palygorskite can be used in agriculture for soil fertilization, since they provide slow release of nutrients, which enhance plant growth and, eventually, high crop yield [35].Therefore, after the conduction of batch and column tests, desorption and regeneration tests were conducted on two consecutive days using deionized water (desorption test) and 0.1 N HCl (regeneration test), respectively, in order to evaluate the adsorbent's desorption capacity and possible use as a soil additive.The choice of solvents for desorption and regeneration tests was guided by the existing literature [34,35].The aim of desorption with deionized water is to evaluate the ability of the adsorbent particles to release ions with the flow of a natural solvent, thereby preventing the introduction of a new element and consequently avoiding the contamination of the area with a new chemical product [36].On the other hand, research has indicated that the use of HCl 0.1 N is the most effective eluent for regenerating these materials [37].The saturated adsorbent was washed with distilled water and then dried at 45 • C for use in the desorption tests.In batch tests, following the same procedure as the adsorption tests, 100 mL of solvent (deionized water or 0.1 N HCl) were used for the desorption and regeneration experiments with AWW.These experiments were performed for all granulometries and initial NH 4 + -N concentrations.In column experiments, 350-500 mL of solvent were added to each column for both experiments, with AWW and BWW.Again, experiments were performed for all the applied initial concentrations of NH 4 + -N (AWW and BWW) and s COD t and PO 4 −3 -P (BWW).Samples were collected at regular intervals for the same period of time applied in the batch and column studies.The desorption and regeneration studies lasted 1440 min until constant values of pollutant concentrations would be observed. Effect of Particle Size and Initial Concentration Firstly, the effect of particle size was examined for AWW of several initial concentrations (5, 10, 50, 100 and 200 mg/L NH 4 + -N).These initial concentrations were chosen to be comparable with previous related studies [35].Table 4 illustrates the removal efficiencies of both adsorbents at the several initial concentrations.It can be concluded that there are no great differences between the granulometries (1-2 mm, 2-4 mm and >4 mm) (Figure 2).This was also stated by other researchers in previous studies [35].On the other hand, Muscarella et al. 
[38] found that the adsorption ability of zeolite grains increases with decreasing particle diameter. In addition, for each grain size tested, ammonium removal occurred rapidly within the first 60 min of contact time, with removal efficiencies ranging from 91.57 to 100%. Similar results were reported in another study [39], where the finest zeolite particle size presented the fastest and highest ammonium adsorption during the first hour, while past that time the removal rate decreased. However, after 60 min, larger particle sizes demonstrated a higher removal capacity, indicating that in smaller particles the available active sites are taken more quickly and saturation is therefore reached in less time. This was explained by the fact that the concentration of available sites is constant; hence, the availability of active sites is nearly the same for smaller and larger particles. Since the diffusion paths of ions are shorter in smaller particles, the available sites are more accessible to ions; smaller particle sizes are therefore expected to favor kinetics mostly in the early stages of adsorption [39,40]. In the current study, after the first hour, the removal rate gradually decreased and equilibrium was finally reached at approximately 120 min. At higher concentrations, the removal efficiency decreased slightly, probably because the adsorbents' ammonium capacities were approached. Similarly, Gianni et al. [41] found that the removal efficiency reached almost 100% for the lowest initial concentration, while it decreased when higher initial concentrations were applied. On the whole, removal efficiencies for both palygorskite and zeolite were higher than 90% for all applied initial concentrations. In addition, of the two adsorbents zeolite was the more efficient, with slightly higher removal efficiencies than palygorskite regarding ammonium removal (Table 4). This can be attributed to the fact that zeolite shows a higher selectivity for NH4+-N cations due to the presence of alkaline earth metal cations on its negatively charged surface, which exchange readily with NH4+-N cations in the AWW [42].
Figure 3 illustrates the effect of initial concentration on the adsorbents' performance. It can be concluded that increased initial ammonium concentrations also lead to higher adsorbed amounts on the adsorbent. This was also observed by Widiastuti et al. [43], who reported that higher initial concentrations of ammonium provided a greater driving force. The ammonium ion could probably migrate to the internal micro-pores from the external surface and therefore exchange with cations on both the external and internal surfaces. Consequently, since there was no great difference in ammonium removal between the various granulometries, in the pH batch tests and in the column studies both adsorbents were used with their commercial granulometry (<5 mm and D50 = 150 µm for zeolite and <4 mm and D50 = 75 µm for palygorskite).

Equilibrium Time

In order to investigate the influence of contact time, batch tests were performed with palygorskite and zeolite as purchased, at an initial NH4+-N concentration of 50 mg/L. Removal was initially very fast for both adsorbents, with efficiencies of 92.9% and 96.2% achieved within the first 60 min for palygorskite and zeolite, respectively (Figure 4). The rate then decreased with increasing contact time and remained almost constant until equilibrium was reached at approximately 180 min. After 400 min, ammonium removal reached up to 94.5% and 99.1% in palygorskite and zeolite, respectively. Similar results were also found in previous studies [35,43-45]. According to Martin et al. [45], ammonium removal by zeolite occurred within the first 180 min, and the removal rate decreased with increasing contact time. Alshameri et al. [44] stated that ammonium removal surpassed 78% within 300 min, a longer contact time than that observed in the current study, after which the removal rate became constant. In addition, Kotoulas et al. [35] found that ammonium removal reached 85% within the first hour and that the removal rate eventually became constant; however, removal reached 99% only at 2880 min, while in the current study 99% removal was achieved at 400 min. The high removal rate within the first hour (Figure 4) can be explained by the fact that the available adsorbing sites are rapidly occupied, followed by fast diffusion into the adsorbent's pores and channels, until equilibrium is finally reached [35]. Regarding palygorskite, Alshameri et al.
[44] found that the removal efficiency of NH4+-N increased rapidly and reached equilibrium very quickly, at 30 min. This may also be attributed to the utilization of the available sites and the eventual attainment of equilibrium. Fast adsorption kinetics make these adsorbents very popular in terms of efficiency and field deployment [44]. However, Alshameri et al. [44] reported lower removal efficiencies (~50%) than those in this study. Gianni et al. [41] investigated ammonium removal with palygorskite and found an optimal contact time of 15 min. After 15 min, removal decreased, probably due to potential NH4+-N desorption, since the available active sites were occupied and the agitation may have broken the bonds between ammonium and the active sites. Once again, the removal efficiencies were lower (≤30%) than in the current study. Similar results were found by Widiastuti et al. [43], where ammonium removal by zeolite was fast within the first 15 min, but the removal rate then began to decrease until equilibrium was reached.
Effect of pH

The pH value of the solution is a key parameter for the removal of several pollutants, including ammonium [41,43]. Electrostatic interactions between adsorbents and adsorbate ions are strongly pH dependent [42,46]. The influence of pH on adsorption was evaluated through batch experiments for both adsorbents. Batch tests were conducted at three pH values, 3 (acidic), 7 (neutral) and 9 (alkaline), while the initial NH4+-N concentration was kept constant at 5 mg/L. Figures 5 and 6 illustrate the effect of pH on palygorskite (pal) and zeolite (zeo) performance, both in terms of contact time (Figure 6) and in terms of the pH value at equilibrium (Figure 5). Rozic [47] stated that, besides the adsorption mechanism, ion exchange is also a main mechanism of ammonium removal by zeolite. Ammonium in aqueous solution can be found in two forms, i.e., as the dissociated form NH4+ or as the non-dissociated form NH3 (ammonia) [43]. In addition, according to Layva-Ramos et al. [48] and Genethliou et al. [39], the ammonium ion NH4+ is the predominant species at pH lower than 7, while at greater (alkaline) pH the NH4+-NH3 equilibrium shifts towards NH3. Sundhararasu et al. [46] also avoided alkaline pH in their adsorption experiments with zeolite, since NH4+ ions tend to evaporate as ammonia at high pH values. In addition, at pH values above 8, NH4+ is neutralized by hydroxyl groups, while below 6, H+ and NH4+ compete against each other for adsorption on the available active sites of the adsorbent [41,49]. Regarding palygorskite, optimum results were obtained at the highest pH of 9 and performance decreased at lower pH values, with the lowest performance at pH 3 (Figure 6). Similar results were published by Gianni et al. [41], who reported that palygorskite is protonated under acidic conditions. The low removal capacity of palygorskite at low pH values can be attributed to the fact that ion exchange between H+ and NH4+ is not favored (there is potential competition between them for the exchange sites on the adsorbent). On the other hand, higher pH values (above 7) result in a negative charge on the adsorbent's surface, conditions that favor the adsorption of cationic NH4+ through electrostatic attraction [42,44]. These findings are in absolute agreement with the results of the current study (Figure 6). Regarding zeolite, the performance changed when the pH varied from 3 to 9.
Overall, at the lowest pH the ammonium removal was highest, and a further increase in pH lowered the removal efficiency (Figure 5). This implies that lower pH intensifies the adsorption and ion exchange, while at higher pH values NH4+ ions are transformed to aqueous NH3, which shows a low ion exchange ability. In addition, it can be observed that within the first 150 min the removal efficiency at pH 7 is higher than at pH 3, while past 200 min the zeolite performs better at pH 3, reaching equilibrium. The performance of zeolite at pH 9 was the lowest, a fact attributed to the low ion exchange ability of NH3 as the predominant species in an alkaline environment. Partially similar results were reported in previous studies [39,43]. Specifically, Widiastuti et al. [43] reported that optimum adsorption takes place in a pH range between 4 and 8, which is consistent with the results of the current study. Genethliou et al. [39] found that pH had little effect on ammonium removal at pH values in the range from 6 to 8, while zeolite's performance dropped significantly at pH values above 8. On the other hand, Saltalı et al. [50] found an optimal pH of 8, which disagrees both with previous studies and with the current study.

Batch Kinetic Models

Knowledge of the adsorption kinetics is of critical importance for the design of practical treatment systems and the field deployment of adsorbents in general [43,44]. The kinetics of NH4+-N adsorption on both adsorbents were analyzed using several linear and nonlinear models, i.e., the Elovich model, the intraparticle diffusion (ID) model, the double-constant (DC) model, the pseudo-first-order (PFO) model and the pseudo-second-order (PSO) model. The use of these kinetic models may help identify the mechanism of adsorption, which depends on the physico-chemical characteristics of the adsorbents and the mass transport process. The values of the adsorption capacity at equilibrium (qe), measured experimentally and calculated from the applied models, were compared. The fitting results of the five applied models and the corresponding kinetic parameters are summarized in Table 5 and illustrated in Figure 7.
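As an illustration of how such fits are obtained, the following Python sketch fits the PFO and PSO models to a time series of uptake values by nonlinear least squares and reports qe and R2 for each model. It is not the study's code: the (t, qt) pairs are placeholder values chosen to resemble the trends described above, not the measurements behind Table 5.

```python
# A hedged sketch of nonlinear least-squares fitting of the PFO and PSO models
# with SciPy; the (t, q_t) pairs are illustrative placeholders, not the
# measurements behind Table 5.
import numpy as np
from scipy.optimize import curve_fit

t   = np.array([15, 30, 60, 120, 180, 400], dtype=float)   # contact time, min (assumed)
q_t = np.array([0.55, 0.75, 0.89, 0.93, 0.95, 0.96])       # uptake, mg/g (assumed)

def pfo(t, qe, k1):
    # pseudo-first-order: q_t = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # pseudo-second-order: q_t = qe^2 * k2 * t / (1 + qe * k2 * t)
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

def r2(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

for name, model, p0 in [("PFO", pfo, (1.0, 0.05)), ("PSO", pso, (1.0, 0.05))]:
    params, _ = curve_fit(model, t, q_t, p0=p0)
    print(f"{name}: qe = {params[0]:.3f} mg/g, R2 = {r2(q_t, model(t, *params)):.3f}")
```

Comparing the fitted qe against the experimental qe,exp and ranking the models by R2, as done below, is exactly the basis on which the governing mechanism is inferred.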
According to Table 5, the ranking of the kinetic models used to fit the data is PSO > DC > Elovich > ID > PFO for palygorskite and DC > Elovich > ID > PSO > PFO for zeolite. This suggests that the adsorption mechanism might differ between the two adsorbents. The experimental adsorbed amount at equilibrium (qe,exp) was 0.952 mg/g for palygorskite and 0.983 mg/g for zeolite. Regarding palygorskite, the results indicate that the experimental data fitted the pseudo-second-order kinetic model best (R2 = 0.686). In addition, qe, which is a significant parameter for the adsorption mechanism, was overestimated by almost all kinetic models, apart from PFO, which underestimated it (Table 5). The qe from the PSO fit correlates with the experimental qe, expressing the adsorbent's removal capacity adequately. Although the correlation coefficients for palygorskite were generally low for all applied models (0.359 < R2 < 0.686), they confirm the findings presented by Alshameri et al. [44] and Gianni et al.
[41], whose results had the best fit to the PSO kinetic model, suggesting that NH4+-N uptake on palygorskite was controlled mostly by a chemisorption process. Regarding zeolite, the experimental data fitted the DC and Elovich models best (R2 = 0.902 and 0.901, respectively). This result is also highlighted by the fact that the calculated adsorption capacities of both kinetic models are very close to the experimental adsorption capacity (Table 5). The good agreement with the Elovich model indicates that the activation energy varies greatly during the sorption process [51]. In addition, the conformity of the ammonium adsorption to the Elovich model suggests that the rates of NH4+-N exchange were governed by a heterogeneous diffusion process. These results are not in agreement with previous studies [39,43,52], where the PSO kinetic model fitted the experimental data best. However, it should be noted that the previous researchers applied fewer kinetic models (PFO and PSO) than the current study (five kinetic models); applying more kinetic models may give different and more trustworthy results. Overall, it can be assumed that three steps are involved in the adsorption kinetics [44,53-55]: the diffusion of ammonium ions from the liquid phase to the liquid-solid interface, the movement of ions from the liquid-solid interface to the solid surfaces and, finally, the diffusion of ions into the particle pores. Based on the fitting of the adsorption models, it is confirmed that the adsorption mainly uses readily available adsorption sites, which results in rapid diffusion and quick equilibration, while after saturation the adsorbed ions move into deeper sites of the adsorbent [44].
Batch Adsorption Isotherms

An adsorption isotherm is very important for designing an adsorption system (in terms of the adsorbents' performance and optimization) and includes several constants related to the surface characteristics, the affinity of the adsorbent and the adsorption capacity [39,43]. The models were applied to the adsorbents' equilibrium data for 5, 10, 50, 100 and 200 mg/L NH4+-N and for the several granulometries (1-2 mm, 2-4 mm and >4 mm). The fitting curves with the experimental data are presented in Figure 8 and the respective model parameters are listed in Table 6. Since there were no significant differences between the granulometries, Figure 8 illustrates the applied isotherm models only for the granulometry >4 mm. As illustrated in Table 6, the ranking of the isotherm models used to fit the data is Freundlich > Henry > Langmuir > Temkin for palygorskite and Henry > Langmuir > Freundlich > Temkin for zeolite. Again, the mechanism might differ between palygorskite and zeolite. The Freundlich model demonstrated a better fit for palygorskite (Figure 8a), while Henry's model fitted the zeolite results best. Worth noticing is also the fact that Henry's isotherm model gave satisfactory results for both adsorbents (R2 > 0.943, Table 6), indicating that a very simple linear form can describe the adsorption for both. Regarding palygorskite, the results do not agree with Alshameri et al. [44], who found that the Langmuir isotherm gave a better fit than the Freundlich one, indicating monolayer adsorption. However, it should be noted that previous researchers focused mainly on only two isotherm models, the Langmuir and Freundlich models [41,44,50,52]; the application of more isotherms casts more light on the adsorption mechanism. On the other hand, Gianni et al. [41] published similar results, where the Freundlich isotherm fitted the results best, highlighting the heterogeneous nature of the process. Concerning zeolite, the Langmuir model may have had the second-best fit to the experimental data; however, the estimated equilibrium capacities (Qe) were lower than the experimental ones (underestimation, Figure 8b, red line). The Freundlich model then gave a better fitting curve (Figure 8b), meaning that the adsorption may be explained by both Henry's and Freundlich's equations, with the second one prevailing. Previous studies are not in complete agreement with the current study, as their results were best described by the Freundlich model [35,43] or by both the Langmuir and Freundlich models [50,52]; again, it should be noted that in many previous studies only the Langmuir and Freundlich equations were applied. Finally, Genethliou et al. [39] found a better fit to the Temkin model, indicating an uneven surface of the particles.
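The isotherm comparison above can be reproduced in outline with the sketch below, which fits the Langmuir, Freundlich and Henry equations to (Ce, qe) pairs and ranks them by R2. This is a hedged illustration only: the data points are invented placeholders, not the equilibrium data behind Table 6 or Figure 8.

```python
# A minimal sketch, assuming SciPy, of fitting the Langmuir, Freundlich and
# Henry isotherms to (Ce, qe) pairs; the data points are invented placeholders,
# not the equilibrium data behind Table 6 or Figure 8.
import numpy as np
from scipy.optimize import curve_fit

ce = np.array([0.2, 0.8, 4.0, 9.0, 20.0])      # equilibrium concentration, mg/L (assumed)
qe = np.array([0.10, 0.20, 0.95, 1.90, 3.80])  # equilibrium uptake, mg/g (assumed)

def langmuir(c, qm, kl):      # monolayer adsorption on homogeneous sites
    return qm * kl * c / (1.0 + kl * c)

def freundlich(c, kf, n):     # empirical model for heterogeneous surfaces
    return kf * c ** (1.0 / n)

def henry(c, kh):             # linear (low-coverage) limit
    return kh * c

for name, model, p0 in [("Langmuir", langmuir, (10.0, 0.05)),
                        ("Freundlich", freundlich, (0.5, 1.2)),
                        ("Henry", henry, (0.2,))]:
    params, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(ce, *params)
    print(f"{name}: params = {np.round(params, 3)}, "
          f"R2 = {1.0 - np.sum(resid**2) / np.sum((qe - qe.mean())**2):.3f}")
```

With the near-linear placeholder data, Henry's one-parameter model scores almost as well as the two-parameter models, which mirrors the observation above that a very simple linear form can describe the adsorption for both adsorbents.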
AWW Experiments

Based on the batch sorption experiments, palygorskite (pal) and zeolite (zeo) (as purchased) were tested in further sorption studies using columns. Firstly, two ammonium concentrations were tested, 200 and 5000 mg/L. The results did not indicate any significant difference for zeolite, which demonstrated a high removal capacity at both initial concentrations, and the performance of zeolite did not seem to decrease (Table 7). However, this was not the case for palygorskite, where the removal efficiencies were lower; therefore, more initial concentrations were tested with palygorskite in order to evaluate its performance. Zeolite demonstrated a higher removal efficiency and a lower equilibrium concentration when a high ammonium concentration was applied (Table 7), while palygorskite demonstrated better performance at the lowest concentration (200 mg/L). In addition, palygorskite's performance decreased with increasing initial ammonium concentration, while zeolite gave the opposite result. Furthermore, for initial concentrations above 1000 mg/L, palygorskite appears to reach its maximum adsorption capacity in the first 100 min, after which the material undergoes sorption and desorption cycles (Figure 9). Nevertheless, according to Figure 9, similar adsorption rates were found for both adsorbents. In addition, a higher amount of adsorbent was used in the zeolite column study, indicating that saturation was reached and that a larger amount of adsorbent does not necessarily lead to a higher removal capacity or adsorption potential. It can be concluded that at smaller ammonium concentrations equilibrium was reached sooner (~400 min for 200 mg/L), while at higher concentrations the system took more time to reach saturation (1440 min) (Figure 9).
Column Kinetic Studies

The kinetics of ammonium adsorption on both adsorbents were analyzed as in the batch studies, using the same kinetic models (Elovich, ID, DC, PFO and PSO), in order to shed light on the adsorption mechanism. Table 8 lists all the fitting parameters along with the qe measured experimentally and estimated with the kinetic models. The applied concentrations for palygorskite were 200, 1000, 1500, 3000 and 5000 mg/L, while for zeolite only 200 and 5000 mg/L were applied, since no decrease in performance was observed (Table 8). It is obvious from the curves in Figure 10a,b that the adsorption mechanism differs between the two tested adsorbents. This is also highlighted by the correlation coefficients (R2) shown in Table 8. Specifically, regarding palygorskite, it was observed that at low ammonium concentrations the governing adsorption model was the Elovich model (R2 = 0.726), while at concentrations higher than 1000 mg/L the adsorption was governed by the ID model, for which the highest correlation coefficients were calculated (Table 8). When intraparticle diffusion occurs, an adsorbate hops from one available adsorption site to another through an adsorption-desorption reaction [56]; this can be produced by pore volume diffusion, surface diffusion or their combination. The Elovich kinetic model is often used to describe second-order kinetics under the assumption that the surface is energetically heterogeneous [57]. In palygorskite, at low concentrations the main mechanism initially follows the Elovich model, while at high concentrations (≥1000 mg/L) the mechanism follows the ID model pattern. On the other hand, zeolite demonstrated different behavior, with a better fit to the PFO kinetic model at the low concentration (R2 = 0.895, Ci = 200 mg/L) and to the PSO model at the higher concentration (R2 = 0.958, Ci = 5000 mg/L). In the PFO kinetic model, the reaction rate depends only on the concentration of the adsorbate and the process is controlled by physisorption, whereas in the PSO kinetic model the rate-determining step is considered to be a chemical reaction between the adsorbent (zeolite) and the adsorbate (ammonium solution), described as chemisorption [43]. This means that in zeolite, according to the above results, at lower adsorbate concentrations (200 mg/L) adsorption was governed mainly by physisorption, but when the concentration increased significantly (5000 mg/L) the mechanism changed to chemisorption, which involves the sharing of electrons or their transfer between adsorbate and adsorbent.
On the whole, it can be concluded that the removal mechanism changes with increasing initial ammonium concentration for both adsorbents. This result is crucial for the selection, design and operation (conditions) of an effective adsorption system.

Brewery Wastewater Experiment

Apart from the experiments with artificial wastewater, column experiments were also conducted with brewery wastewater (fixed-bed column experiments). In comparison with AWW, which contained only NH4+-N as a pollutant, natural brewery wastewater contains several other components (e.g., pollutants, physico-chemical parameters, etc.) that may interact with each other, enhancing or inhibiting the columns' overall adsorption performance. Therefore, apart from NH4+-N, sCODt and OP were also analyzed, and the physico-chemical parameters (pH, electrical conductivity, EC) were measured. According to previous studies, fixed-bed columns were more efficient than fluidized-bed columns [35,58]. The columns operated as fixed-bed reactors and were fed with brewery wastewater. Samples were taken at specific time intervals until equilibrium, in order to determine the kinetics of NH4+-N, sCODt and PO4−3-P removal (Section 2.4). Table 9 lists the initial concentrations of the pollutants in the brewery wastewater and the results obtained at the equilibrium state. Figure 11 illustrates the variation in the pH and electrical conductivity (EC) measurements over time. It is noteworthy that a significant initial decline occurs within the first few minutes of the process for both adsorbents and both parameters, followed by a stabilization of the values. Figure 12 presents the results of NH4+-N, sCODt and OP adsorption obtained in the column kinetic experiments (under batch operation) with brewery wastewater for palygorskite (pal) and zeolite (zeo). It can be observed that both palygorskite and zeolite were able to adsorb most of the available NH4+-N within 400 min (>85%, Table 9, Figure 12b1), while equilibrium was reached at 1440 min. In a previous study [35], zeolite demonstrated very high NH4+-N removal efficiencies (>99%) within a shorter period of time (120 min).
In terms of overall performance, palygorskite demonstrated better results than zeolite, since the removal efficiencies were either similar or higher (Table 9, Figure 12). Also worth noticing is the high adsorption of phosphorus on the palygorskite substrate (>90%). The distinctive structural and adsorptive characteristics of palygorskite for the purpose of phosphorus removal are widely recognized and have found extensive application [33,59,60]. In addition, as illustrated in Figure 12a3, the phosphorus concentration continued to decrease even after 1400 min, meaning that adsorption sites were still available after this period of time, while the adsorption of the other pollutants (such as sCODt and NH4+-N) reached equilibrium earlier. In zeolite, the adsorption of PO4−3-P was lower than that of NH4+-N, indicating that zeolite's ability to adsorb PO4−3-P is limited compared with its ability to adsorb NH4+-N [35,61]. PO4−3-P is mainly adsorbed on zeolite through inner-sphere complexation [35]. Zeolite's low adsorption capacity for PO4−3-P was also reported in previous studies [35,62]. Concerning sCODt removal, the columns presented the same pattern, with maximum removal reaching 44.67% and 46.30% in palygorskite and zeolite, respectively. Due to the relatively short experimental time period, sCODt removal is mainly attributed to adsorption.
Worth noticing is the fact that the adsorbed amounts of all pollutants (NH4+-N, sCODt, PO4−3-P) per gram of adsorbent were higher in the column filled with palygorskite than in the one filled with zeolite (Figure 12(b1,b2,b3)). This highlights the better performance of palygorskite as an adsorbent material in the treatment of brewery wastewater. Table 10 lists all the fitting parameters, along with the qe measured experimentally and estimated with the kinetic models, for the brewery wastewater test. Kinetic models were applied for all three pollutants (NH4+-N, sCODt, PO4−3-P). It is obvious that the adsorption mechanism regarding NH4+-N and sCODt follows the same pattern in both adsorbent materials (palygorskite and zeolite), with the governing kinetic models being PSO and PFO for NH4+-N and sCODt, respectively (correlation coefficients R2, Table 10). Ammonium's adsorption mechanism is mainly described by chemisorption, involving a chemical reaction between the adsorbent and the adsorbate (ammonium solution). These results agree with those reported by Genethliou et al.
[42], who stated that relatively strong bonds form between the palygorskite particles and the adsorbed NH4+-N and thus that the uptake of NH4+-N on palygorskite was dominated by chemisorption. However, these results do not agree with those produced in the column studies testing various concentrations of ammonium (synthetic wastewater, Table 8). This may be attributed to the fact that the brewery wastewater contained a lower ammonium concentration (5.61 mg/L, Table 9) than those applied in the column studies (range 200-5000 mg/L), which means that the adsorption mechanism changes as the initial concentration increases. Regarding sCODt, the adsorption mechanism was best described by physisorption, depending only on the concentration of the adsorbate. The same was reported by Genethliou et al. [42], who suggested that the predominant adsorbent-adsorbate forces are physical. On the other hand, phosphorus adsorption did not show similar behavior between the two tested adsorbents. Specifically, the experimental data for palygorskite fitted the Elovich model best, while those for zeolite fitted the ID model best (R2 = 0.955 and R2 = 0.841 for palygorskite and zeolite, respectively). These results are also highlighted by the fact that the calculated adsorption capacities of the kinetic models are very close to the experimental adsorption capacities (Table 10). The good agreement of phosphorus adsorption on palygorskite with the Elovich model suggests that the mechanism was governed by a heterogeneous diffusion process.

Desorption and Regeneration Experiments

The investigation of the desorption process is very significant, since zeolite and palygorskite are useful for agricultural soil fertilization, as they provide a slow release of nutrients [35,39], which enhances plant growth and, eventually, high crop yield [35]. The saturated adsorbents were tested for desorption in deionized water. Regeneration of the adsorbents is important, since it may affect substrate selection and process cost [39,40]. The regeneration of the adsorbents was performed with 0.1 N HCl. The results of all desorption experiments are presented in Tables 11-13. In the batch desorption tests, the results showed a remarkably slow release even after equilibrium, with NH4+-N recoveries below 6% for all granulometries and both adsorbents (Table 11); slightly higher recoveries were found for the finest granulometry (1-2 mm) and for zeolite. The results of Kotoulas et al. [35] agree with those of the current study, reporting zeolite recoveries varying from 2.77 to 4.39%, with higher values found at the lowest granulometry. In addition, Genethliou et al. [42], who found similar results, stated that NH4+-N uptake by palygorskite was practically irreversible (9.1 ± 0.1% and 7.8 ± 0.1% recoveries for distilled and tap water). On the contrary, Genethliou et al.
[39] found that for zeolite the regeneration efficiency of NH4+-N was satisfactory, reaching 63.75% of the adsorbed quantity. Regarding the column desorption studies, slightly higher values were observed for palygorskite (recoveries of 6.96-17.77%), while for zeolite these values were again very low (recoveries of 3.45 and 2.07%, Table 12). These results can be explained by the fact that the aqueous solution lacked any ion exchange capacity to release and replace the ammonium cations; in order to regenerate used adsorbents, a strong ion exchange solution is needed [63]. However, as shown in Tables 11 and 12, although in the batch studies the NH4+-N recovery in the regeneration tests was higher than in the desorption tests, the recoveries were again insignificant, following a pattern similar to that of the desorption with deionized water. On the other hand, in the column studies of palygorskite, the recoveries were in all cases higher than those found in the batch studies (Table 12), indicating that a column system is more effective for desorption and regeneration. For zeolite, the pattern was similar to that found in the batch studies, with low NH4+-N recoveries. Regarding the desorption column studies with BWW, the recoveries of NH4+-N were low, 5.36 and 5.72% for palygorskite and zeolite, respectively (Table 13). In the regeneration tests, the NH4+-N recovery was significantly higher (36.02%) for palygorskite, while for zeolite it increased slightly but again remained low (15%). The sCODt recoveries were generally low, with values of 21.13 and 19.99% for palygorskite and zeolite, respectively. Genethliou et al. [42] found desorption efficiencies of sCODt for tap and distilled water of 61.7 ± 0.4% and 54.0 ± 0.1%, respectively, suggesting that sCODt uptake was reversible and that the predominant adsorbent-adsorbate forces are mainly physical. Finally, the PO4−3-P recoveries were also generally low (Table 13), indicating that PO4−3-P uptake by both adsorbents was practically irreversible.
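The recoveries quoted above all rest on the same simple metric, sketched below; the two amounts in the example are assumed values chosen only to illustrate the order of magnitude reported in Tables 11-13.

```python
# A minimal sketch of the recovery metric reported in Tables 11-13:
# recovery (%) = amount desorbed / amount previously adsorbed * 100.
# The two amounts below are assumed values, chosen only for illustration.

def recovery_percent(desorbed_mg: float, adsorbed_mg: float) -> float:
    """Fraction of the previously adsorbed pollutant released by the eluent."""
    return 100.0 * desorbed_mg / adsorbed_mg

# e.g., an adsorbent that bound 95 mg NH4+-N and released 5 mg into deionized
# water shows a ~5.3% recovery, i.e., practically irreversible uptake
print(f"recovery = {recovery_percent(5.0, 95.0):.1f}%")
```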
Conclusions

In this study, a sequence of experiments was undertaken to evaluate the processes taking place within the substrate of a constructed wetland system. More specifically, the investigation primarily centered on the utilization of ion exchange materials, namely zeolite and palygorskite, for the removal of ammonium from artificial wastewater and for the treatment of brewery wastewater. Experimental assessments were conducted in both batch and column configurations. In the batch experiments, both of the aforementioned adsorbent materials exhibited remarkable efficiency for the removal of ammonium, surpassing the 90% threshold. In comparison, zeolite displayed the higher performance in ammonium removal; this can be attributed to zeolite's higher selectivity for cations due to the presence of alkaline earth metal cations on its negatively charged surface, which can easily exchange with NH4+-N ions in the aqueous wastewater. In the column experiments, zeolite showed the higher efficiency in NH4+-N removal, coupled with lower equilibrium concentrations, when subjected to higher ammonium concentrations; conversely, palygorskite demonstrated better performance when the ammonium concentrations were lower. A noteworthy observation is that both palygorskite and zeolite demonstrated the capacity to adsorb a substantial portion of the available NH4+-N in brewery wastewater, exceeding an 85% removal rate. It is worth mentioning that palygorskite exhibited the higher phosphorus removal efficiency, which can be attributed to its unique structure. To identify the adsorption mechanisms in both experiments, kinetic models including the pseudo-first-order, pseudo-second-order, Elovich, intraparticle diffusion and double-constant models were employed. These mechanisms depend on the physico-chemical characteristics of the adsorbents and the mass transfer process, indicating different adsorption mechanisms for these materials. Isotherm models, which are essential for designing an adsorption system and include several constants related to the surface characteristics, adsorbent affinity and adsorption capacity, were also calculated. Finally, desorption and recovery experiments were conducted, as these materials find applicability in agriculture due to their ability to facilitate the controlled release of nutrients.

Figure 2. Effect of particle size on ammonium removal efficiency (10 mg/L NH4+-N) for (a) palygorskite and (b) zeolite at time 400 min.
Figure 4. Effect of contact time for palygorskite and zeolite on ammonium removal (NH4+-N initial concentration = 50 mg/L).
Figure 6. Effect of pH over time (total 400 min) on the removal efficiencies for palygorskite (pal) and zeolite (zeo).
Figure 8. Adsorption isotherms of ammonium for (a) palygorskite and (b) zeolite (granulometry >4 mm) in batch studies. qe is the adsorption capacity and Ce is the equilibrium concentration of NH4+-N.
Figure 11. (a) pH and (b) EC values in effluent brewery wastewater in column studies for palygorskite (pal) and zeolite (zeo).
Table 5. Fitting parameters of kinetic models for NH4+-N adsorption (initial ammonium concentration = 5 mg/L) in batch studies. Note: qe,exp and qe,est are the experimental and estimated adsorbed amounts at equilibrium.
Table 6. Isotherm parameters for ammonium adsorption onto the adsorbents tested in batch studies.
Table 7. NH4+-N removal efficiencies and equilibrium concentrations for palygorskite and zeolite in column studies.
Table 8. Fitting parameters of kinetic models for NH4+-N adsorption in column studies at various initial concentrations.
Table 9. Concentrations and removal efficiencies at equilibrium for both adsorbents. Note: P is palygorskite and Z is zeolite. BDL = Below Detection Limit.
Table 10. Fitting parameters of kinetic models for NH4+-N, sCODt and PO4−3-P adsorption in column studies with brewery wastewater.
Table 11. Batch desorption results with AWW.
Table 12. Column desorption results with AWW.
Table 13. Column desorption results with BWW.
SOD1 mutations associated with amyotrophic lateral sclerosis: analysis of variant severity

Mutations in the superoxide dismutase 1 gene (SOD1) are linked to amyotrophic lateral sclerosis (ALS), a neurodegenerative disorder predominantly affecting upper and lower motor neurons. The clinical phenotype of ALS shows inter- and intrafamilial heterogeneity. The aim of the study was to analyze the relations between individual SOD1 mutations and the clinical presentation, using in silico methods to assess the severity of the SOD1 mutations. We identified SOD1 causative variants in a group of 915 prospectively tested consecutive Polish ALS patients from a neuromuscular clinical center, performed molecular modeling of mutated SOD1 proteins and in silico analysis of the mutations' impact on the clinical phenotype, and carried out survival analysis of the associations between mutations and the hazard of clinical end-points. Fifteen SOD1 mutations were identified in 21.1% of familial and 2.3% of sporadic ALS cases. Their effects on SOD1 protein structure and functioning, inferred from molecular modeling and in silico analyses, correlate well with the clinical data. The molecular modeling results support the hypothesis that folding intermediates, rather than the mature SOD1 protein, give rise to the source of cytotoxic conformations in ALS. Significant associations between the type of mutation and clinical end-points were found.

Amyotrophic lateral sclerosis (OMIM:105400) is a heterogeneous severe neurodegenerative disorder, the hallmark of which is an adult-onset loss of upper and lower motor neurons. It leads to progressive paresis and atrophy of skeletal muscles, resulting in quadriplegia and fatal respiratory failure. Approximately 90-95% of patients do not have affected first-degree relatives and are described as sporadic cases (sporadic ALS, sALS) 1, and ca. 10% show a familial predisposition (fALS) with Mendelian or non-Mendelian patterns of inheritance 2. Since 1993, mutations in more than forty genes have been reported to associate with ALS; the most frequent are those in the SOD1 gene, encoding the essential antioxidant enzyme Cu,Zn superoxide dismutase (http://alsod.iop.kcl.ac.uk/) 3,4. Coding-sequence (cds) SOD1 mutations have been found in ALS patients from all over the world. However, the distribution of SOD1 mutations differs markedly even among apparently similar populations (i.e., the Netherlands and Belgium, Ireland and England), and the same mutations in different populations can be associated with distinct clinical presentations. The clinical phenotype is highly variable, and patients with a particular SOD1 mutation show intrafamilial differences in the severity of symptoms and the speed of disease progression 5. Notably, it is believed that the pathogenicity of SOD1 mutations is not due to a lack of functional protein but rather to the accumulation of its misfolded aggregates 6. It is still unclear whether all ALS-related SOD1 mutations are in fact causative, co-causative, modifying or simply accompanying variants. A prospective gene therapy targeting SOD1 expression, or a pharmacotherapy aimed at the elimination of misfolded SOD1 protein, can only be based on a detailed understanding of the molecular mechanisms of pathogenesis of individual SOD1 mutations 7. To address the above issues, we determined SOD1 mutations in a large group of ALS patients (n = 915) and predicted their impact on SOD1 structure and functioning using molecular modeling and in silico prioritization methods.

Mutation screening and variant analysis.
DNA was isolated from peripheral blood leukocytes using standard methods, and all exons with flanking intronic regions of the SOD1 gene were sequenced as described previously 9. The Ensembl Variant Effect Predictor (https://www.ensembl.org/Homo_sapiens/Tools/VEP) was used to annotate genomic variants 10. Human Splicing Finder 3.0 (HSF3), EX-SKIP and the BDGP Splice Site Prediction by Neural Network webtools were used to predict the influence of the detected variants on pre-mRNA splicing 11-13. ConSurf (https://consurf.tau.ac.il/) was used to analyze amino acid sequence conservation 14. PredictSNP (https://loschmidt.chemi.muni.cz/predictsnp/) and NetDiseaseSNP (http://www.cbs.dtu.dk/services/NetDiseaseSNP/) were used to predict the impact of mutations on the function of the SOD1 protein 15,16. Aggrescan (http://bioinf.uab.es/aggrescan/), TANGO (http://tango.crg.es/) and Aggrescan3D 2.0 (http://biocomp.chem.uw.edu.pl/A3D2/) software were used to predict the aggregation tendency of the mutated proteins 17-19.

Molecular modeling. The crystal structure of human SOD1 was taken from the Protein Data Bank (PDB id: 2C9V) 20. SOD1 is a compact homodimer, with each subunit of 153 amino acids forming a β-barrel structure stabilized by a C57-C146 disulfide bridge and a zinc ion in the active site. The two subunits are held together by strong hydrophobic forces, making SOD1 one of the most compact and stable proteins. Each subunit also contains a copper ion undergoing alternating oxidation-reduction in the course of the dismutation of O2·− to O2 and H2O2. The metal ions are bound by the side chains of H46, H48, H63, H71, H80, D83 and H120. The force field parameters and partial charges for the metal ion binding sites were calculated following earlier quantum chemical calculations on similar systems 21,22. To impose the new parameters, a patch script for the topology file was constructed to model proper interactions between the metal ions and the adjacent amino acid atoms. Energy minimization and molecular dynamics (MD) simulations were performed in the NAMD program version 2.10 using the all-atom force field CHARMM27 23. The protein dimer was simulated in a TIP3P water box with dimensions of 6.2 nm × 6.2 nm × 9.5 nm, which contained 37,000 atoms in total. Four sodium ions were also included in solution to maintain the neutrality of the system. The native protein as well as its 12 mutant variants were initially subjected to 10,000 steps of energy minimization and then to a 10-ns MD equilibration with the temperature increasing from 20 to 298 K. For each investigated system a 20-ns MD simulation was performed. All MD simulations were conducted using Langevin (stochastic) dynamics 24, used as the default in the NAMD program. A friction coefficient of 5 ps−1 was used and the temperature was set to 298 K. Nonbonded interactions were dampened by employing a switching function for the van der Waals and electrostatic interactions, using a cutoff of 1.6 nm. All bond lengths were constrained using the SHAKE algorithm 25, therefore a longer time step of 2 fs was applied. The modeled molecular structures were visualized with YASARA Structure v. 16.1.2.

Statistical analyses. Associations between the SOD1 mutations carried by ALS patients and their clinical phenotype were studied with survival analysis methods.
Four clinically relevant end-points were defined to estimate progression of the disease: wheelchair-bound (loss of walking capacity), bulbar involvement (speech or swallowing impairment), respiratory insufficiency and death (overall survival). All available information from the patients' records was used to establish whether the chosen end-point had occurred (if yes, the observation was considered "complete"; if no, it was "censored") and the time from the onset of ALS symptoms (defined as the first muscle paresis) to the achievement of the selected end-point (for complete observations) or to the end of observation (for censored observations). Kaplan-Meier curves showing survival until each end-point for patients stratified according to specific SOD1 mutations, or according to a bioinformatics parameter common to a group of mutations, were compared with the log-rank test for two curves or with the chi-square test for more than two curves. A Cox proportional hazards regression model was used to estimate associations between survival and the quantitative ConSurf parameter and to calculate the hazard ratio (HR) and its 95% confidence interval. Only mutations found in at least four subjects with available clinical data were subjected to direct comparisons. Age of ALS onset was compared between mutations with the Mann-Whitney test. Site of ALS onset and clinical phenotype were compared between mutations with the Fisher exact test. P < 0.05 was considered statistically significant. The Statistica 13 program was used for statistical calculations. Ethics approval and consent to participate. Clinical characteristics of the patients. Fifteen different SOD1 coding-sequence (cds) mutations were identified in 12 fALS and 20 sALS cases. Based on the medical documentation and familial anamnesis we gathered clinical data on an additional 32 fALS subjects from the affected families. In total we analyzed clinical data of 64 patients with SOD1 cds mutations. Their general demographics are summarized in Table 1. Among patients with SOD1 mutations, the classic ALS phenotype was observed in 56% of patients (both upper and lower motor neuron involvement; UMN, LMN), 41% showed progressive muscle atrophy (PMA, isolated signs of LMN), and 3% (n = 1) a mixed ALS-MSA-P (multiple system atrophy-parkinsonism) phenotype. The signs of UMN damage included pseudobulbar syndrome, spasticity, exaggerated reflexes and pathological signs. The LMN damage presented as muscle wasting, fasciculations, flaccid muscle tone and diminished/absent reflexes. In 88.1% of the patients the first symptoms occurred in the lower limbs, in 9.5% in the upper limbs, and in 2.4% in the bulbar-innervated muscles. The median survival was at least 84 months (mean 105.0 ± 69.4; range 12-312), while the tracheostomy-free survival was at least 36 months (mean 73.8 ± 75.8; range 11-312 months), since 17 patients were still alive at the time of analysis. Detailed clinical characteristics of the patients with an SOD1 coding-sequence mutation are presented in Table 2. Molecular modeling. The structures of the SOD1 wild-type dimer and 14 mutant proteins were subjected to 20-ns all-atom MD simulations in a water environment, which is enough to equilibrate the system and form new interactions after a mutation is introduced. A comparison of the obtained mutant structures with the WT one revealed how each mutation affected the structure of the SOD1 dimer and also how it could alter potential interactions with other proteins (Table 3, Fig. 1, Supplementary Figs. S1-S15). 
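As the Methods describe, minimization, heating and 20-ns production runs were performed in NAMD 2.10 with the CHARMM27 force field and custom metal-site parameters. Purely as an illustration of the shape of such a protocol, and not of the authors' actual setup, an analogous run can be sketched with the OpenMM Python API; the input file name and the stock force field below are placeholders, and the staged 20→298 K heating is collapsed into a single target temperature.

```python
# Illustrative only: the study used NAMD 2.10 with CHARMM27 plus custom
# metal-site parameters; this sketch mirrors the broad outline of the
# protocol (minimize, then Langevin dynamics) with stock OpenMM inputs.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, PME, HBonds, Simulation

pdb = PDBFile("sod1_dimer_solvated.pdb")               # hypothetical input
ff = ForceField("charmm36.xml", "charmm36/water.xml")  # stand-in force field

system = ff.createSystem(
    pdb.topology,
    nonbondedMethod=PME,
    nonbondedCutoff=1.6 * unit.nanometer,  # cutoff reported in the paper
    constraints=HBonds,                    # analogous to SHAKE on bonds
)

# Langevin dynamics at 298 K with a 5 ps^-1 friction coefficient and a
# 2 fs time step, matching the settings reported for the NAMD runs.
integrator = LangevinMiddleIntegrator(
    298 * unit.kelvin, 5 / unit.picosecond, 0.002 * unit.picoseconds
)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)

sim.minimizeEnergy(maxIterations=10_000)  # 10,000 minimization steps
sim.step(10_000_000)                      # 20 ns at 2 fs per step
```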
Analysis of coding sequence conservation. Most of the identified mutations affected evolutionarily conserved residues of SOD1. Positions 32 and 109 are evolutionarily variable, and two other variable positions, D90 and L144, were the most frequently mutated in the analyzed group in both familial and sporadic cases. For details see Supplementary Table S2. Predicting the functional impact of mutation. To infer probable functional consequences of the SOD1 mutations we analyzed them with the PredictSNP software, a consensus classifier of eight commonly used tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT, and SNAP. The most common mutations identified in the analyzed group (s/fALS), K3E, D90A and L144S, were predicted to be neutral (accuracy 63%). No statistical association was identified for the results of the NetDiseaseSNP analysis. For details see Supplementary Table S3. Analysis of alternative splicing potential. An analysis of the possible impact of the 15 mutations on alternative splicing with three web tools (HSF 3.0, EX-SKIP, and BDGP) showed inconsistent results. HSF 3.0 indicated that most of the mutations could alter splicing, while BDGP found that only S105L potentially created a new acceptor site. For details see Supplementary Table S4. Aggrescan3D indicated a lower aggregation propensity for the region of the D109Y mutation in the mutated dimers (D109Y/D109Y and D109Y/ref) compared to the reference dimer. TANGO, which is based on the physico-chemical principles of beta-sheet formation extended by the assumption that the core regions of an aggregate are fully buried, indicated K3E, A4V and S105L as more aggregation-prone, and L126* and L144S as less aggregation-prone, compared to the WT sequence. Associations of clinical end-points with mutations and their molecular modeling results. Due to the limited number of available samples, statistical analyses were possible only for five mutations: K3E, L144S, G41S, L126*, and N139D. The overall survival differed significantly between the K3E, L144S, G41S, L126* and N139D mutation carriers (p = 0.00028, chi-square test, Supplementary Fig. S16). It was significantly shorter for G41S compared to the other four mutations (p < 0.025). Survival of L144S mutation carriers was significantly longer compared to patients harboring the G41S, K3E and N139D mutations (p < 0.05). Molecular predictors associated with longer survival included: a neutral mutation according to the PredictSNP software (p = 0.0034, Supplementary Fig. S17), a neutral or mild potential consequence of the mutation according to molecular modeling (p = 0.024, Supplementary Fig. S18), decreased aggregation propensity by TANGO (p = 0.012, Supplementary Fig. S19) and mutation of variable residues by ConSurf (HR 0.43, 95% CI 0.23-0.81, p = 0.0086). The multivariate Cox regression model adjusted for patients' gender and age of ALS onset showed that a deleterious mutation as predicted by the PredictSNP software was the strongest independent predictor of death (HR 4.47, 95% CI 1.98-10.11, p = 0.00032). The time from the first ALS symptom onset to respiratory insufficiency differed significantly between patients carrying the K3E, L144S and G41S mutations (p = 0.00022, chi-square test, Supplementary Fig. S20). It was the longest for L144S (no patient reached the end-point), the shortest for G41S (all patients reached the end-point within 2 years) and intermediate for K3E; the differences were significant for all 3 pairs of mutations (p < 0.02). 
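The end-point comparisons above follow the workflow set out in the Methods: Kaplan-Meier curves compared with log-rank (or chi-square) tests, and hazard ratios from Cox regression. As a minimal illustration, with a hypothetical data file and column names rather than the authors' actual data, such an analysis might look as follows with the Python lifelines package.

```python
# Minimal sketch of the survival workflow described in the Methods;
# the data frame and its column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("als_patients.csv")  # columns: months, event, mutation, conserved

# Kaplan-Meier curve per mutation group.
for mut, grp in df.groupby("mutation"):
    KaplanMeierFitter().fit(grp["months"], grp["event"], label=mut).plot()

# Pairwise log-rank test, e.g. G41S vs. L144S carriers.
g41s = df[df["mutation"] == "G41S"]
l144s = df[df["mutation"] == "L144S"]
res = logrank_test(g41s["months"], l144s["months"],
                   event_observed_A=g41s["event"],
                   event_observed_B=l144s["event"])
print(res.p_value)

# Cox proportional hazards model for a quantitative covariate (e.g. a
# ConSurf conservation score), yielding a hazard ratio with its 95% CI.
cph = CoxPHFitter().fit(df[["months", "event", "conserved"]],
                        duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```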
A longer time to respiratory insufficiency was associated with a neutral mutation according to the PredictSNP software (p = 0.0027, Supplementary Fig. S21), splicing predicted as unaltered by the HSF 3.0 software (p = 0.0040, Supplementary Fig. S22), a neutral or mild potential consequence of the mutation according to molecular modeling (p = 0.0036, Supplementary Fig. S23), and decreased aggregation propensity by the TANGO software (p = 0.012, Supplementary Fig. S24). Analyses of the time from onset to bulbar involvement are presented in Supplementary Fig. S25. Harboring L144S was associated with a significantly longer time to bulbar involvement than K3E (p = 0.0015, log-rank test) and a longer time, of borderline significance, than G41S (p = 0.050), with no significant difference between K3E and G41S (p = 0.44). The time to bulbar involvement was significantly longer in familial compared to sporadic ALS cases (p = 0.0095, Supplementary Fig. S26). A longer time to bulbar involvement was also associated with decreased aggregation propensity by the TANGO prediction, when compared to neutral or increased aggregation propensities (p = 0.00043, Supplementary Fig. S27), and with splicing predicted as unaltered, when compared to potentially altered splicing according to HSF 3.0 (p = 0.00062, Supplementary Fig. S28). The time from ALS onset to wheelchair use was significantly longer for L144S compared to K3E (p = 0.0039, Supplementary Fig. S29). A longer time to wheelchair use was also associated with decreased aggregation propensity by the TANGO prediction, when compared to neutral or increased aggregation propensities (p = 0.0031, Supplementary Fig. S30). We found that a more advanced age at disease onset was linked to shorter survival (HR 1.046), with the risk of death growing by 4.6% with every additional year of age at onset. Further association analysis of the clinical data revealed that the age of ALS onset was significantly lower in L144S carriers compared to patients carrying the K3E mutation (45.8 ± 9.8 vs 53.3 ± 8.1 years, p = 0.019). G41S was associated with upper limb onset as compared to the other frequent mutations (K3E, L144S, L126* and N139D; p = 0.017). The PMA phenotype was significantly more frequent among N139D mutation carriers as compared to K3E (100% vs 19%, p = 0.021). Discussion Of over 185 SOD1 mutations identified to date, only some (e.g., H46R, D90A, and R115G) cause ALS with a defined clinical phenotype, including a characteristic age of onset, survival time and/or site of onset (lower limbs in D90A and H46R). Other mutations present a more varied course or have only been identified in individual patients/families, making a comprehensive analysis of the genotype-phenotype relation difficult. The frequency of SOD1 mutations varies between populations, from 13 to 20% in fALS patients and from 1 to 2% in sALS patients [28][29][30] . Thus the mutation frequency in the present study (21.1% in fALS, 2.3% in sALS) is relatively high. The L144S and K3E variants were the most frequent among the Polish ALS patients (but not in other populations, except for L144S among Brazilian patients) [31][32][33] . In contrast, D90A, the most frequent European SOD1 mutation, was present in only 0.4% of cases (4.1% of fALS). Among the mutations represented by the largest numbers of individuals, L144S was associated with the earliest disease onset and the slowest progression, while G41S was characterized by a particularly aggressive progression and short survival, comparable with the phenotype reported for A4V in the North American population 34 . 
Although the numbers of affected individuals were not sufficient for statistical analysis, clinical observation suggested that individuals with the G37R, N86S, homozygous D90A or G93C mutations presented with relatively less severe phenotypes, while those with A4V, G72S, and S105L presented with more severe ones (based on the time to reaching clinical end-points), similarly to previous reports from other populations 9,35-38 . The L126* mutation, previously described as aggressive, showed a highly variable survival in our study, ranging from 36 to 228 months 39 . As for the clinical phenotypes, the statistical analysis showed that the N139D mutation was linked to the most homogeneous clinical presentation of PMA, as opposed to K3E, characterized by the prevalence of classic ALS. From the clinical observations: besides N139D, isolated lower motor neuron involvement (PMA) was observed in G72S, N86S, G93C, S105L and L126*, while the classic phenotype prevailed among patients with the K3E, G41S, D90A, and L144S mutations, and was also seen in A4V, W32* and G37R, each represented by single individuals. The D90A homozygotes shared the classic phenotype with prevalent lower motor neuron involvement and lower limb onset, whereas one of the patients heterozygous for D90A had a bulbar onset and a short survival of 25 months. We cannot exclude that the MSA-P in the patient with the D109Y mutation was an accompanying condition, as the mutation had previously been described in classic ALS with prevalent UMN involvement, bulbar onset and long survival 40 . In contrast to other studies, we found onset in the upper limbs to be highly infrequent among patients harboring SOD1 mutations. All the cds mutations identified in our study group were classed as disease-causing according to the Human SOD1 non-synonymous SNP analysis (http://bioinfogroup.com/sod1/snp/), which is consistent with our prioritization results 41 . However, according to the SNP analysis database, none of the mutations was predicted to influence the aggregation tendency or amyloid propensity. Our protein modeling results (Table 3, Supplementary Figs. S1-S15), in which we compared the WT and mutated SOD1 structures after all-atom MD simulations of the SOD1 dimer in a water environment imitating mammalian cytosol conditions 42 , showed that only two mutations, D109Y and C111Y, had an increased aggregation potential. However, these mutations were not considered aggregation-prone by the SNP analysis, Aggrescan, TANGO or Aggrescan3D (Supplementary Table S5). This is concordant with previous observations that SOD1 in fALS has a reduced propensity to form aggregates, while soluble heterodimers and trimeric SOD1 complexes may be more toxic than large aggregates 43,44 . A slightly higher aggregation potential was predicted for the A4V mutant. The movement of the loops adjacent to the mutated residue after the A4V substitution is considerable (Supplementary Fig. S3), which could facilitate aggregation. After the mutation, the valine side chain is directed towards the center of the β-barrel rather than exposed, but it could still promote movements of adjacent fragments. Our results appear to support the hypothesis that folding intermediates of SOD1 are an important source of cytotoxic conformations in ALS pathology 6 . Indeed, it was previously proposed that the A4V mutation destabilizes the SOD1 monomer and weakens the dimer interface 45 . Recently, Brasil et al. observed low levels of SOD1 monomers in cells co-expressing WT and A4V SOD1, and the predominant formation of heteromeric species 46 . 
Based on this they suggested that WT SOD1 might exist primarily as unfolded monomeric intermediates and only then as fully active dimers. On the other hand, unfolded and misfolded monomers might be the predominant mutant SOD1 form. We found L126* to be the most damaging to the protein structure. The truncation of a substantial fragment of the polypeptide (loop 126-141 and β-strand 142-151) rearranges both the dimer interface and the active site. The reported differences between the results concerning SOD1 protein stability obtained with different methods (bioinformatics, molecular modeling, in vitro, in vivo) suggest the existence of as yet unidentified factors involved in the formation of pathogenic SOD1 conformations in vivo 43,47 . SOD1 gene variants undergoing alternative splicing have already been described in fALS patients 48,49 . In the present study the potential for alternative splicing due to the cds mutations was disputable, as only some of the programs used indicated a small probability of such an effect. Nevertheless, the effects of two of those mutations, W32* and S105L, seem sufficiently likely to deserve experimental verification. Potentially altered splicing predicted with the HSF 3.0 software was significantly associated with bulbar involvement in sALS patients and with respiratory insufficiency in familial and sporadic ALS patients. We observed that the least evolutionarily conserved positions in SOD1 (D90 and L144) were also the most frequently mutated in our study group, and the D90A and L144S carriers suffered from a slowly progressing ALS. Mutations of conserved residues of SOD1 were significantly associated with shorter survival times and a shorter time between disease onset and respiratory failure in ALS patients. Similarly, PredictSNP (a consensus classifier for prediction of disease-related amino acid mutations) classified the K3E, D90A, D109Y, and L144S mutations as neutral, and their carriers' symptoms were relatively less severe in terms of age of onset and survival times. The pathogenicity of at least some ALS-related SOD1 mutations seems to involve the formation of amyloid-like aggregates. However, Aggrescan classified WT SOD1 and nearly all of its mutated versions studied here as being of low aggregation propensity (highly negative Na4vSS scores); still, specific nucleation points from which an ordered fibrillar structure could spread under certain conditions would nevertheless make the mutated protein amyloidogenic. Notably, the K3E, L144S, and N139D variants were predicted to be even less aggregation-prone than WT SOD1, while A4V was the only mutant with a markedly enhanced aggregation potential. Algorithms derived from and used to predict amyloid fibril formation in the absence of other biological factors offer a considerable degree of accuracy for predicting amyloid-aggregation propensity in vivo, but they remain to be improved before this prediction can be extended to disease manifestation and pathology 50 . Statistical analyses linked some clinically relevant ALS end-points (bulbar involvement, wheelchair dependence, respiratory insufficiency, survival time) not only with increased but also with neutral aggregation propensity as predicted by the TANGO software. To sum up, while the prediction programs and molecular modeling could not consistently define the pathogenicity of each and every SOD1 mutation, there was good agreement between those predictions and the disease severity at both ends of the ALS spectrum, i.e., the least and the most severe cases. 
A caveat of our study is that the only experimentally derived SOD1 structure available is a dimer of the mature molecule, whereas the prion-like ALS pathomechanism is most probably associated with an immature or misfolded protein 6 . For instance, it was shown recently that both WT and mutant SOD1 form dimers and oligomers, but only mutant SOD1 aggregates and forms intracellular inclusions. Moreover, co-expression of WT and mutant SOD1 in various cell models resulted in the formation of a larger number of inclusions, as compared to cells expressing WT or mutated SOD1 separately 51 . Taking into consideration the dysfunction of numerous cellular pathways observed in ALS, aggregation of SOD1 does not seem to be the only cause of the disease. According to the multistep hypothesis of ALS 52 , single SOD1 mutations may influence more than one step leading to ALS onset. The major effect of SOD1 mutations in ALS is linked to protein aggregation and a prion-like propagation of misfolded molecules. These mutations may also lead to a loss of function of SOD1 by affecting its structure and/or interaction patterns. The loss of function involves not only the dismutase enzymatic activity, e.g., associated with the N86S mutation 53 , but may also involve a loss of the nuclear function in which SOD1 acts as a transcription factor 54 . In one sporadic ALS patient we identified a nonsense mutation at codon 32 (p.W32*), which was absent from the whole exome/genome databases (1000 GP, gnomAD) 55 . Since W32* was also found in the patient's asymptomatic mother, aged > 70, we were not able to prove that the mutation was pathogenic. We further found that SOD1 W32* was associated with a dismutation activity in erythrocytes reduced by half 53 , which might point to a loss of SOD1 function 56,57 . The premature stop codon could result in the shortest reported truncated SOD1 protein, but most likely nonsense-mediated mRNA decay prevents the synthesis of such an abnormal protein 58 . A putative loss of SOD1 function in ALS was reported in previous studies. For instance, the mutation V30Dfs*8 59 should produce a very short, non-functional truncated SOD1 protein. G28_P29del, caused by alternative splicing of exon 2 of SOD1, leads to reduced transcription and a low level of SOD1 protein in the mutation carriers 60 . A similar result was reported for the mutation S108Lfs*15 61 , as the authors observed a ca. 50% reduction of the SOD1 protein level and could not detect the truncated SOD1 (a protein with the predicted molecular weight) by Western blotting. The above-mentioned SOD1 mutations, as well as other pathogenic variants including D90A, G41S, and I112M 62,63 , showed reduced penetrance. Interestingly, a recent in vitro study has shown that the tryptophan residue at position 32 (W32) is necessary for the formation of a competent seed for aggregation allowing the prion-like propagation of SOD1 misfolding from cell to cell, and the W32S substitution blocked this phenomenon 64 . A study on SOD1 single-copy knock-in models of ALS in C. elegans also suggests the involvement of both a loss and a gain of function of SOD1 in ALS development 65 . The contributions of the loss- and gain-of-function mechanisms vary in different neuronal populations. In the studied model, glutamatergic neuron degeneration was induced by oxidative stress due to the loss of SOD1 function, a phenomenon also observed in a significant fraction of ALS patients. 
Recent reports on children with a homozygous truncation mutation (p.C112Wfs*11), no SOD1 activity and severe symptoms during infancy also suggest that the loss of SOD1 enzymatic activity contributes to motor neuron disorders 66,67 . To sum up, SOD1 haploinsufficiency with all its consequences might be one of the factors in an oligogenic etiology of ALS. It is most likely that many cases of ALS are due to the presence of multiple gene variants with different pathogenicity. Understanding the input of such variants to the development of neurodegeneration and their interactions with diverse environmental factors (e.g., toxins or the microbiome) is critical for the development of efficient therapies, especially in regard to potential gene therapy 4,68 . Conclusions We found L144S and K3E to be the most frequent SOD1 mutations among Polish ALS patients. Carrying the L144S mutation was linked to the longest, and G41S to the shortest, overall survival. Despite intrafamilial heterogeneity, L144S was significantly associated with the least severe, K3E with a medium, and G41S with the most aggressive clinical course.
2022-01-08T14:35:41.899Z
2022-01-07T00:00:00.000
{ "year": 2022, "sha1": "d3d4fbe24fb03a7c1632132a8dcf75dc8e67c2cb", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-03891-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5f928a51629fca57d047d56d3bc30cce6ff946cb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
73682279
pes2o/s2orc
v3-fos-license
The Influence of Prenatal Exposure to Tobacco Smoke on Neonatal Body Proportions The objective of this study was to determine neonatal anthropometric indices such as: birth weight, crown-heel length, head and chest circumference and ponderal index, in relation to the maternal smoker status (active and passive smoking). The study included 147 neonates born in 2003-2004 at the Princess Anna Mazowiecka University Hospital in Warsaw admitted to the Neonatal and Intensive Care Department of Warsaw Medical University. Neonates were assigned to one of three groups: babies of mothers who were active smokers, passive smokers or non-smokers, based on a questionnaire concerning exposure to tobacco smoke and on the concentration of cotinine in maternal urine. The babies of mothers who were active smokers were born with lower birth weight (p=0.033), lower crown-heel length (p=0.026), lower head circumference (p=0.002) and lower chest circumference (p=0.021) significantly more often than babies of non-smoker mothers. Babies whose mothers were active smokers had an increased risk of lower head circumference, OR 3.9 (1.4-10.7, 95% CI), and an increased risk of lower chest circumference, OR 4.0 (1.5-10.9, 95% CI). The babies of mothers who were passive smokers also had lower anthropometric indices, but the differences were not statistically significant. No effect on the ponderal index was observed among the neonates whose mothers were active or passive smokers. Smoking during pregnancy causes symmetrical restriction of intrauterine growth. Introduction Tobacco smoke has a harmful influence on the development of the fetus not only when the mother is an active smoker, but also when a pregnant woman is exposed to tobacco smoke in the environment (passive smoking). Of over 4200 constituents of tobacco smoke, the most harmful to the fetus are: nicotine, carbon monoxide, nitrogen oxide, hydrogen cyanide, cadmium and reactive oxygen species. The direct influence of nicotine, changes in placental structure, and the formation of pathological hemoglobins (carboxyhemoglobin, methemoglobin, cyanmethemoglobin) result in persistent hypoxia of fetal tissue and a decreased supply of nutrients. Babies of mothers who are smokers have a birth weight 154-459 g lower than those of non-smoker mothers, and the deficit in birth weight increases proportionally to the number of cigarettes smoked by the mother [1][2][3][4][5][6][7][8]. Some authors confirm that the association between maternal cigarette smoking during pregnancy and reduced birth weight is modified by maternal genetic susceptibility. They suggest an interaction between metabolic genes and cigarette smoking [4,5]. 
Smoking during pregnancy impairs not only weight gain, but also growth in body length, head and chest circumference [1][2][3][4][5][6][7][8]. The objective of this study was to determine neonatal anthropometric indices such as: birth weight, crown-heel length, head and chest circumference and ponderal index, in relation to maternal smoker status (passive and active smoking). Material and Methods The study included 147 neonates born in 2003-2004 at the Princess Anna Mazowiecka University Hospital in Warsaw and admitted to the Neonatal and Intensive Care Department, Medical University of Warsaw. Live-born neonates from singleton pregnancies, whose mothers gave informed consent and completed a questionnaire assessing the level of tobacco smoke exposure during pregnancy, were included in the study. The study protocol was approved by the Bioethical Committee of Warsaw Medical University. Investigations were conducted in accordance with the 1975 Helsinki Declaration. The subjects were divided into three groups based upon the questionnaire regarding exposure to tobacco smoke and on the concentration of cotinine in maternal urine. There were 58 subjects whose mothers were active smokers; the mothers declared themselves as such and their urine cotinine level was >200 ng/mg creatinine. Subjects whose mothers were passive smokers exposed to environmental tobacco smoke during pregnancy numbered 64 (the cotinine level in maternal urine ranged from 5-200 ng/mg creatinine). The number of subjects whose mothers were non-smokers with no environmental exposure was 25 (the maternal urinary cotinine level was <5 ng/mg creatinine). Some mothers underestimated the degree of tobacco smoke exposure during pregnancy, and this was corrected by the assessment of cotinine levels in the urine. Five neonates whose mothers declared passive smoking in the questionnaire and 7 neonates of women who declared no exposure to tobacco smoke were included in the group of neonates whose mothers were active smokers, because the level of cotinine in the mother's urine was >200 ng/mg creatinine. A further 19 babies of women with urine cotinine levels in the 5-200 ng/mg creatinine range were included in the passive smoker group, although their mothers had denied exposure to tobacco smoke in the questionnaire. In the first 24 hours after delivery a 5 ml urine sample was collected from the mother. The sample was frozen at -80°C and stored until cotinine levels were established. The level of cotinine was quantified by High Performance Liquid Chromatography (HPLC). The urinary concentration of cotinine was standardized by comparison with creatinine excretion and the results were expressed as the cotinine:creatinine ratio (ng/mg). The gestational age of the newborns in the study was between 26 and 42 weeks and the average gestational age was 38 weeks. The majority of subjects, 114 (77.6%), were delivered at term; the remaining 33 (22.4%) were premature. There was no statistically significant difference in gender, mode of delivery, or number of preterm newborns between the 3 groups of subjects (Table 1). Directly after delivery anthropometric indices were recorded: birth weight (g), crown-heel length (cm) and head and chest circumference (cm). Birth weight was determined with electronic scales with up to 10 g accuracy. Crown-heel length, head and chest circumference were measured with a tape measure with up to 0.5 cm accuracy. Head circumference was recorded above the orbita, chest circumference at the level of the nipples. 
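The group assignment above reduces to a simple threshold rule on the urinary cotinine:creatinine ratio. A minimal sketch in Python, with invented assay values purely for illustration:

```python
# Group assignment from the urinary cotinine:creatinine ratio, using the
# cut-offs given above (>200 active, 5-200 passive, <5 non-smoker).
def smoking_group(cotinine_ng: float, creatinine_mg: float) -> str:
    ratio = cotinine_ng / creatinine_mg  # ng cotinine per mg creatinine
    if ratio > 200:
        return "active smoker"
    if ratio >= 5:
        return "passive smoker"
    return "non-smoker"

print(smoking_group(450.0, 1.8))  # ratio = 250 -> "active smoker"
print(smoking_group(18.0, 2.0))   # ratio = 9   -> "passive smoker"
```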
The ponderal index was defined as 100 × [birth weight (g) / crown-heel length (cm)³]. The Statistical Analysis System (SAS) program package was used for statistical analysis. Descriptive statistics were calculated for each parameter. The Shapiro-Wilk test was used to verify the hypothesis concerning the correspondence of the distribution of quantitative variables with the normal distribution. The Kruskal-Wallis test was used to compare the groups for quantitative variables with a distribution different from normal. Differences between two groups for quantitative variables with a distribution different from normal were tested using the Wilcoxon test. Fisher's exact test was used in the analysis of the relationships between qualitative variables. The associations between anthropometric indices and maternal smoker status and other confounding factors, such as gender, gestational age and complications of pregnancy, were assessed using a multivariate logistic model (GLIMMIX). In order to make the analysis more credible, the variables were discretized into four levels in accordance with the quartiles, and a cumulative logit model for ordered response was used. The odds ratio was calculated. P-values less than 0.05 were considered statistically significant. Results The lowest birth weight was observed in subjects whose mothers were active smokers, and the difference from the birth weight of subjects whose mothers did not smoke was statistically significant, p = 0.033. The median birth weight in this group of subjects was 3175 g; it was 230 g lower than the birth weight of subjects whose mothers were passive smokers and 325 g lower than the birth weight of subjects whose mothers were non-smokers. The median birth weight in the group of subjects with passive tobacco exposure during pregnancy was 3405 g, 95 g lower than the birth weight of subjects with no tobacco exposure during pregnancy. However, the difference was not statistically significant (Table 2). The median crown-heel length of the subjects with active tobacco exposure was 53 cm, 1 cm shorter than the median crown-heel length of the subjects with no tobacco exposure during pregnancy. The difference was statistically significant (p=0.026) (Table 2). The lowest values for head circumference were noted in the group of subjects with active tobacco exposure during pregnancy. In this group the median head circumference was 33 cm; it was 1 cm lower than in the group of subjects with passive tobacco exposure (34 cm) and 2 cm lower than in those with no tobacco exposure during pregnancy (35 cm). The difference between the head circumference of subjects with and without tobacco exposure during pregnancy was statistically significant (p=0.002). The median head circumference of subjects with passive tobacco exposure was also 1 cm lower than with no tobacco exposure during pregnancy, but the difference was not statistically significant (Table 2). Subjects whose mothers smoked cigarettes during pregnancy also had a lower chest circumference than those whose mothers were passive smokers or non-smokers. The median chest circumference in the group of subjects whose mothers were active smokers was 32 cm. The difference was statistically significant (p=0.021) in comparison with the group of subjects whose mothers were non-smokers (Table 2). 
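As a quick plausibility check on the ponderal index defined above, one can plug in the group medians from the Results; the short sketch below does this. Note that pairing a median weight with a median length is only approximate, since the median of a ratio is not the ratio of medians.

```python
# Ponderal index: 100 * weight (g) / length (cm)^3.
def ponderal_index(weight_g: float, length_cm: float) -> float:
    return 100 * weight_g / length_cm ** 3

# Medians reported above for the active-smoker group: 3175 g, 53 cm.
print(round(ponderal_index(3175, 53), 2))  # ~2.13, close to the group
                                           # medians of about 2.1 reported below
```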
The median ponderal index in all three groups of subjects was similar: 2.11 (babies of active smokers), 2.09 (babies of passive smokers) and 2.14 (babies of non-smokers) (Table 2). Logistic regression models estimating the association of maternal cigarette smoking with anthropometric indices in relation to other confounding factors, such as gender, gestational age and complications of pregnancy, showed that active smoking by pregnant women has a statistically significant influence on the head and chest circumference of neonates. The babies with active tobacco exposure were at 3.9 (1.4-10.7, 95% CI) times greater risk of a lower head circumference and 4.0 (1.5-10.9, 95% CI) times greater risk of a lower chest circumference compared with those with no tobacco exposure during pregnancy. In the group of subjects whose mothers were active smokers throughout pregnancy there was also a tendency towards lower birth weight, 2.6 (1.0-6.9, 95% CI), and lower crown-heel length, 2.4 (0.9-6.6, 95% CI). These odds ratio values were close to statistical significance. In the case of neonates of mothers who were exposed to tobacco smoke during pregnancy (passive smokers), the risk of lower birth weight and crown-heel length as well as head and chest circumference was also higher, but this was not statistically significant (Table 3). Discussion Many authors report that smoking during pregnancy has a negative influence on growth of the fetus. They often use a questionnaire to assess maternal smoking status. However, the smoking habit reported by mothers themselves is not an accurate measure of fetal tobacco exposure, particularly with regard to passive smoking [9]. In our study some mothers underestimated the degree of tobacco smoke exposure during pregnancy. Therefore the smoking status of the mothers was corrected by an assessment of cotinine levels in the mother's urine. The results of our study were similar to others which confirmed that neonates with tobacco exposure during pregnancy have lower birth weight, crown-heel length, and head and chest circumference [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. In our study the difference in birth weight between neonates with and without tobacco exposure during pregnancy was 325 g (p=0.033). Smaller deficits have been reported for neonates whose mothers were passive smokers and hence less exposed to tobacco smoke [2][3][4][5][6][7][8][9][10][11][12]. In our study neonates of mothers who were passive smokers achieved a 95 g lower birth weight and 1 cm lower head circumference than neonates of mothers who did not smoke, but the differences were not statistically significant. A multivariate logistic model assessing relationships between anthropometric indices and maternal cigarette smoking status, gender, gestational age and complications of pregnancy showed that in the group of neonates whose mothers were active smokers there was a higher risk of lower head circumference, 3.9 (1.4-10.7, 95% CI), and lower chest circumference, 4.0 (1.5-10.9, 95% CI). The odds ratios for lower birth weight, 2.6 (1.0-6.9, 95% CI), and lower crown-heel length, 2.4 (0.9-6.6, 95% CI), were close to statistical significance. As reported by Roquer et al., exposing a pregnant woman to cigarette smoke had a similar effect on the anthropometric parameters of neonates (birth weight, crown-heel length and head and chest circumference) as smoking <10 cigarettes a day. The study showed a reduction in body length in the babies of passive smoker mothers by 1 cm compared with those of mothers who did not smoke [13]. 
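The quartile-based cumulative logit modeling described in the Methods (run in SAS GLIMMIX) has a rough Python analogue in statsmodels' OrderedModel. The sketch below is illustrative only, with a hypothetical data file and column names; it is not the authors' analysis, and the SAS and statsmodels parameterizations are not identical.

```python
# Rough analogue of a cumulative logit model for an ordered outcome
# (an anthropometric index discretized into quartiles); data and column
# names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("neonates.csv")  # columns: weight_q, smoker, gest_age, male
df["weight_q"] = pd.Categorical(df["weight_q"], ordered=True)

model = OrderedModel(df["weight_q"],
                     df[["smoker", "gest_age", "male"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)

# Exponentiated coefficients play the role of the odds ratios reported above.
print(np.exp(res.params[["smoker", "gest_age", "male"]]))
```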
The difference in crown-heel length between neonates with and without tobacco exposure during pregnancy ranges, according to different authors, from 0.79-2.3 cm [2][3][4][5][6][7][8]. In our study this difference was 1 cm (p=0.026). Lindley et al. showed that the reduction in crown-heel length is associated with the number of cigarettes smoked by the mother. Relative to babies whose mothers did not smoke, the crown-heel length deficit in babies whose mothers smoked fewer than 10 cigarettes a day during pregnancy was lower (0.62 cm) than in those whose mothers smoked over 10 cigarettes a day (a deficit of 0.89 cm). A statistically significant deficit in crown-heel length was also observed in babies whose mothers had stopped smoking from the 32nd week of pregnancy. This provides evidence for the negative and irreversible influence of tobacco smoke on body length increase already in early pregnancy [6]. Jaddoe et al. registered a significantly lower weekly increase in femoral bone length, by 0.19 mm (-0.23 to -0.14, 95% CI), in fetuses of mothers who smoked ≥ 9 cigarettes/day. Lower fetal growth parameters were registered on the sonogram from the 10th week of gestation. The same authors also showed a significantly lower increase in head circumference, by 0.56 mm (-0.73 to -0.40, 95% CI), from the 25th week of gestation [7]. The ponderal index, calculated to establish neonatal body proportions, reflects the relationship between body weight and length and is independent of race or gender. It increases with gestational age, because with maturation the weight of the body increases more than its length. In children with growth retardation, a normal ponderal index indicates a proportional reduction in weight and length (symmetric growth retardation), whereas a low ponderal index may suggest that weight is more affected than length (asymmetric growth retardation). In our study, comparable values of the ponderal index in the groups of babies with active tobacco exposure (2.11) and without tobacco exposure during pregnancy (2.14) show that maternal tobacco smoking inhibits the increase of both the weight and the length of the body. Neonates with tobacco exposure during pregnancy tend to be symmetrical in their growth retardation. Other authors also observed no significant effect of smoking in the prenatal period on the ponderal index. Values of the ponderal index were comparable between the group of neonates whose mothers smoked and those whose mothers did not smoke during pregnancy [8,14]. Lindley et al. found that continued smoking throughout pregnancy was associated with an increase in the ponderal index. Infants of smokers tend to be shorter and have a higher ponderal index, while the infants of non-smokers tend to be longer and have a lower ponderal index. Infants of smokers who stopped smoking also had a statistically significant increase in ponderal index of 0.027 (95% CI 0.009, 0.045) compared with the infants of non-smokers of the same birth weight and gestational age [6]. The main reason for fetal growth disorders with maternal tobacco exposure during pregnancy is a decreased supply of nutrients and oxygen during fetal life. The factors accounting for this condition are: morphological changes in the placenta of smoker mothers and limitation of blood flow in the intervillous space of the placenta associated with the direct vasoconstrictor effect of nicotine [1,15]. 
Furthermore, nicotine, by inhibiting the active transport of amino acids in the placental microvilli, decreases protein synthesis by the fetus, as do the decreased serum concentrations of fetal growth hormones (insulin, IGF-I and its binding protein IGFBP-3) [1,16]. Nicotine penetrates directly into the fetal circulation. The presence in placental tissue of cytochrome CYP2A6, whose enzymes participate in the biotransformation of nicotine into cotinine, has not been confirmed. Therefore, the human placenta does not pose a metabolic barrier to nicotine transfer to the developing fetus [1]. The extent of the birth weight deficit in neonates whose mothers smoked during pregnancy is associated with the genetically determined ability to biotransform nicotine. Wang et al. showed that the body weight reduction in neonates with active tobacco exposure during pregnancy depends on the arrangement of alleles in the genes coding for enzymes participating in nicotine biotransformation. In cases where the arrangement of alleles in the maternal genotype for the gene CYP1A1 was AA, the reduction of neonatal birth weight of smoker vs. non-smoker mothers was 252 g. However, if the arrangement of alleles for this gene was Aa or aa, the birth weight of neonates whose mothers were active smokers was 520 g lower. The presence of the gene GSTT1 was associated with a decrease in birth weight of 285 g, while its absence was associated with a reduction of 642 g. The greatest birth weight reduction took place with the allele arrangement Aa/aa for gene CYP1A1 and the simultaneous absence of gene GSTT1 [4]. Restricted fetal growth is therefore the result not only of the adverse influence of tobacco smoke, but also of the negative interaction between metabolic genes and cigarette smoking. Conclusions 1. Neonates whose mothers actively smoked during pregnancy have statistically significantly lower birth weight, crown-heel length, and head and chest circumference than neonates whose mothers did not smoke and were not exposed to tobacco smoke during pregnancy. 2. The ponderal index determining body proportions is comparable between neonates whose mothers were active smokers and those who were non-smokers during pregnancy.
2019-03-12T13:05:02.324Z
2012-10-23T00:00:00.000
{ "year": 2012, "sha1": "0039cd7b2a13742d531dea83924b2e770369342f", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/the-influence-of-prenatal-exposure-to-tobacco-smoke-on-neonatal-bodyproportions-2167-0420.1000117.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "effe913f7734084c4c9ef08f1d00f083332f2bdb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9705606
pes2o/s2orc
v3-fos-license
Predictors for Increasing Eligibility Level among Home Help Service Users in the Japanese Long-Term Care Insurance System Objectives. This cross-sectional study described the prevalence of possible risk factors for an increasing eligibility level of long-term care insurance in home help service users who were certified as support level 1-2 or care level 1-2 in Japan. Methods. Data were collected from October 2011 to November 2011. Variables included eligibility level, grip strength, calf circumference (CC), functional limitations, body mass index, memory impairment, depression, social support, and nutrition status. Results. A total of 417 subjects (109 males and 308 females, mean age 83 years) were examined. There were 109 subjects with memory impairment. When divided by cut-off values, care level 2 was found to have a higher prevalence of low grip strength, low CC, and depression. Conclusions. Some potentially modifiable factors such as muscle strength could be risk factors for an increasing eligibility level. Introduction The rising cost of long-term care (LTC) for the elderly has become a growing concern. According to a forecast released by the Organization for Economic Cooperation and Development (OECD), more than twice the current cost of LTC will be needed in 2050 [1]. Indeed, Japan has been experiencing a rapid growth of its aging population such as the world has never experienced, and the cost of the LTC insurance system doubled in only 10 years after the system started [2]. The Japanese LTC insurance system consists of 7 eligibility levels: 2 support levels and 5 care levels. At the support levels, the purpose of utilizing the service is basically the prevention of progression to a care level. In contrast, people at care level 1 or higher need some help with activities of daily living, and those at care level 3 or higher need total care for ambulation or clothing. In addition, the cost of care services for care level 3 or higher accounts for two-thirds of the total LTC cost, although these people make up less than 40% of users. For this reason, prevention of progression to care level 3 or higher is important. To prevent an increasing eligibility level with efficient intervention, it is important to investigate risk factors that make it possible to identify high-risk subjects. However, previous longitudinal studies investigated unmodifiable factors such as gender, economic status, and caregivers as independent factors [3,4]. Modifiable factors such as muscle strength or mass, nutritional status, and depression have been identified as risk factors for disability, hospitalization, or death in healthy community-dwelling elderly [5][6][7][8][9][10][11][12][13][14]. We hypothesized that these modifiable factors could also serve as risk factors for an increasing eligibility level among those who were already certified at an eligibility level. The purpose of this study was therefore to reveal the prevalence of potential risk factors for an increasing eligibility level in a cross-sectional observational study of those who were certified at care level 2 or lower. Design. The sample for this study was drawn from community-dwelling elderly who utilized home help services provided by Consumers' Cooperatives (Co-op) Aichi and Kanagawa in Japan. Recruitment and the survey were conducted in October and November 2011. The study protocol was approved by the ethics review board of the Nagoya University Graduate School of Medicine (approval no. 1274). Written informed consent was obtained from each of the subjects prior to enrollment in the study. 
All data were organized and centralized at the data center of the Yamada laboratory at the Nagoya University Graduate School of Medicine. Subjects. The subjects were drawn from individuals aged ≥65. Inclusion in the study required a certified LTC support level of 1-2 or care level of 1-2. Subjects who were difficult to communicate with were excluded. The sample size was calculated from the track record of home help service users in Co-op Kanagawa from March 2007 to March 2010. The incidence rates of progression to care level 3 or higher within 3 years were 4.3% in support level 1, 4.1% in support level 2, and 7.2% in care level 1. In contrast, care level 2 had a remarkably high rate of 27.7%. Therefore, we treated care level 1 or lower as one group and planned to recruit the same numbers of subjects from care level 1 or lower and from care level 2. We then assumed that the hazard ratio for progression to care level 3 or higher of those who had a risk factor to those who did not was 2.0. We considered the lowest quartile of each parameter, such as grip strength or calf circumference (CC), as the cut-off value for determining a risk factor. At least 359 subjects were required in each arm to provide 80% power to detect a hazard ratio of 0.5 using a 2-sided significance level of 0.05 by chi-square test. After allowing a maximum of 20% for dropouts or data unsuitable for analysis, we expected that approximately 449 subjects needed to be enrolled in this study. Demographic Characteristics. The data on eligibility level, service utilization, medical history, and family configuration were provided by care managers, who visit those who utilize LTC services once a month and manage their care plans. Eligibility Level. Initial certification of eligibility levels in the LTC insurance system is conducted when individuals or their families send an application to the municipal government. Officials assess the applicant's physical and mental health, and the time required for nursing care is calculated. Then, the LTC certification committee rates the eligibility level based on this assessment. For example, if the required time for nursing care is less than 32 minutes, the applicant is assigned support level 1, and if the time is more than 110 minutes, the applicant is assigned care level 5. People in support levels 1-2 are able to independently perform basic activities of daily living, such as ambulation or clothing, and are considered to need some support to prevent an increase in eligibility level due to physical or mental impairments. People in care levels need assistance in performing basic activities of daily living. Typically, people in care levels 1-2 are able to walk independently, while people in care levels 3-5 have difficulty walking alone. As a general rule, eligibility levels need to be updated within 12 months. Measures. Measurements were performed in each subject's home. Body weight was measured using a digital weight scale (BC-301-SV; Tanita Co, Tokyo, Japan). As an alternative indicator of height, demispan was measured in the supine position. Demispan was defined as the distance from the clavicular notch of the sternum to the opposite fingertip while the arm is laterally outstretched [15]. In measuring the height of the elderly, demispan resolves problems such as the errors caused by kyphosis or difficulty in standing. A previous study measured demispan in the seated position [15]. 
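Power calculations of the general kind quoted earlier in this section (two-sided α = 0.05, 80% power, chi-square comparison of incidence proportions) can be sketched with statsmodels. The inputs below are placeholders drawn from the incidence rates cited above; the exact assumptions behind the authors' figure of 359 per arm are not fully specified in the text, and the result of this sketch depends strongly on the assumed baseline rate.

```python
# Approximate per-arm sample size for comparing two incidence proportions
# with a chi-square test (illustrative, not the authors' computation).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_low = 0.072         # assumed incidence in the lower-risk group
p_high = 2.0 * p_low  # risk ratio of 2.0 for subjects with a risk factor

effect = proportion_effectsize(p_high, p_low)  # Cohen's h
n = NormalIndPower().solve_power(effect_size=effect,
                                 alpha=0.05, power=0.80,
                                 alternative="two-sided")
print(round(n))  # subjects per arm under these placeholder assumptions
```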
In our study, however, demispan was measured in the supine position, and height in the standing position, in a separate group of subjects aged ≥50 years without kyphosis (75 males and 68 females). The estimated formula was height = 1.38 × demispan + 45.3 (cm) (r = 0.77, p < 0.01) for males and height = 1.28 × demispan + 49.6 (cm) (r = 0.70, p < 0.01) for females. Calf circumference is an index for screening the risk for disability [9]. CC was measured at the point of greatest circumference while the subject was supine with the knee flexed at 90°. Measurements were made for both legs and the greater value (in mm) was used as the index for CC. Handgrip strength was measured with the Jamar dynamometer set at the second handle position. The participant sat with the wrist in a neutral position and the elbow flexed at 90° [16]. The grip strength of each hand was measured twice, and the highest value (in 0.1 kg) was used as the index of handgrip strength. Prior to administering the survey of home help service users, the principal investigator provided and ran training sessions on the above-described measurements for care managers or persons in charge of providing services in Co-op Aichi and Co-op Kanagawa. One hundred and twenty-five examiners, including 56 from Co-op Aichi and 69 from Co-op Kanagawa, practiced measuring CC and MAC 10 times on a single subject to master the method of measurement. Then, the examiners measured CC and MAC twice on 5 subjects on different days, and each subject was measured by more than 2 examiners. From these data, intraclass correlation coefficients (ICC) for each examiner and the averages of the CC and MAC obtained from each subject were calculated. The examiners repeated the measurements if one of the following conditions was met: the ICC was <0.9, or the value of the measurement was more than 1 cm above or below the average. The ICC of each examiner ranged from 0.90 to 1.00 for both the CC and MAC measurements. The averages of the ICC were 0.97 ± 0.04 for CC and 0.96 ± 0.05 for MAC. The memory impairment screen (MIS) [17] was used to confirm the credibility of the assessment by questionnaire. This test required subjects to recall 4 words, and the score ranged from 0 to 8, with lower scores indicating poorer memory. The 5-item geriatric depression scale (GDS5) was used to assess depressive symptoms [18]. The original version was translated into Japanese and its reliability and validity were established [19]. It is scored from 0 to 5, with higher scores indicating more severe depression. Functional limitation was assessed using the performance measure for activities of daily living 8 (PMADL-8) [20]. The PMADL-8 is composed of a list of 8 performance items potentially requiring daily physical activity in chronic heart failure patients and determines the extent to which patients currently experience difficulties in performing daily physical activity, as evaluated on a four-category response scale. It is scored from 8 to 32, with higher scores indicating more severe functional limitations. The participation scale was used to assess participation restriction [21]. The participation scale is composed of a list of 5 activity items, such as going out for a hobby or conversing/exchanging e-mails with family/friends. Response options for each question ranged from 1 to 4, and responses were determined by the frequency of the activities. It is scored from 5 to 20, with lower scores indicating more severe participation restriction. We also assessed subjects' social support, which is received through interacting with others. Zimet et al. 
designed the original social support scale [22], and Iwasa et al. modified it to create a Japanese version consisting of 7 items [23]. We preliminarily surveyed healthy, community-dwelling elderly using the Japanese version of the questionnaire, and pared the original list of 7 items down to the following 3 items: (1) There is a special person who is around when I am in need. (2) My family really tries to help me. (3) I have friends with whom I can share my joys and sorrows. These three items were chosen because of the strong correlations between these 3 items and the other 4 items. We also confirmed that there is a strong correlation between the total score of these 3 items and the total score of all 7 items (unpublished data). The response options for each question ranged from 1 to 7, and responses were determined by the quality of the support. It is scored from 3 to 21, with higher scores indicating better support. The mini-nutritional assessment (MNA) was used to assess nutritional status [24,25]. The MNA is composed of eighteen items, such as the content of meals, BMI, medication use and mid-arm circumference (MAC). It is scored from 0 to 30, with lower scores indicating poorer nutritional condition. For the analysis of questionnaires, we excluded subjects with MIS scores ≤4, which indicated a mild cognitive impairment level. All data were analyzed using SPSS for Windows (version 16.0; SPSS, Tokyo, Japan). A value of p < 0.05 was regarded as statistically significant. Study Population. A total of 417 subjects were surveyed (109 males and 308 females). Subject characteristics are presented in Table 1. We had planned to recruit the same numbers of subjects from care level 1 or lower and care level 2. However, this proved difficult because the number of those who were difficult to communicate with or refused to cooperate with the survey was higher than expected in care level 2. As a result, the number of subjects in care level 1 or lower was double that in care level 2. The average age was 83 years in each group. There were 83 males (29.9%) in care level 1 or lower and 26 males (18.7%) in care level 2. More than half of our subjects lived alone, and the ratio was higher in care level 1 or lower. One hundred and nine subjects (26.1% of the total) had MIS scores ≤4 and were excluded from the questionnaire analysis. Prevalence of Risk Factors. The comparison of the prevalence of risk factors between care level 2 and care level 1 or lower is shown in Table 2. Care level 2 had a higher prevalence of low grip strength and depression (GDS5 score ≥2) than care level 1 or lower. In contrast, there were no differences in the prevalence of CC <31 cm, BMI <18.5 kg/m², or MNA ≤16 between the two groups. Correlations among the parameters are shown in Table 3. Grip strength, CC, GDS5, and social support were correlated moderately or weakly with the PMADL-8 and the participation scale, respectively. There were moderate correlations between MNA and BMI and between MNA and CC. Social support was negatively correlated with GDS5. Discussion The present study is the first to compare the prevalence of possible objective risk parameters as well as questionnaires in those who were certified at care level 2 or lower in the Japanese LTC insurance system. The results of this study support our hypothesis: subjects in care level 2 had poorer grip strength, CC, and GDS5 values than those in care level 1 or lower. Among our subjects, the prevalence of CC <31 cm, the cut-off value for predicting disability in females, was as high as 35%. 
Furthermore, grip strength was similar to that of those certified at an LTC eligibility level in previous studies [26]. Therefore, from the viewpoint of muscle strength or mass, our subjects were frail and might be a representative sample of those certified at the same eligibility levels in the Japanese LTC insurance system. The prevalence of malnutrition, indicated by MNA scores ≤16, was approximately 10%. This prevalence was 10 times as high as that in non-disabled elderly [27]. Similarly, 22.5% of the subjects in our study were underweight, with BMI values <18.5 kg/m². This prevalence was two times greater than that in independent community-dwelling elderly [28]. A low MNA score alone has been reported as a risk factor for disability or death [12,29,30]. In addition, malnutrition and underweight may lead to sarcopenia, which is one of the primary causes of the onset of disability among the frail elderly population [31][32][33]. Thus, we confirmed that our subjects were at high risk for disability. However, there were no differences in the prevalence of MNA ≤16 and BMI <18.5 kg/m² between care level 2 and care level 1 or lower. This is because nutritional status is not considered in certifying the eligibility level in the Japanese LTC insurance system [34]. Thus, malnutrition should be given more attention in the prevention of an increasing eligibility level, because the prevalence of malnutrition was high even at low eligibility levels. In addition to the MNA and BMI, progressive weight loss should be taken into account, because unintentional weight loss (5% in half a year or more) is one of the initial signs used in screening for sarcopenia [31]. The future results of this study will reveal the relationship between malnutrition, being underweight or weight loss, and an increasing eligibility level. Depressive symptoms were related to eligibility level, activity limitations, and participation restrictions in the present study. In previous studies, depressive symptoms predicted disability, hospitalization, and death [35][36][37]. In contrast, social support was negatively correlated with depressive symptoms; therefore, social support may be a countermeasure against the adverse impact of depressive symptoms on an increasing eligibility level. This is because social support has been shown to buffer the effects of negative life events or disease on mental health [38]. The prevalence of memory impairment was as high as 26%. This result highlighted the difficulty of using questionnaires to assess those certified at an eligibility level in the LTC insurance system. Questionnaires for assessing depressive symptoms, nutritional status, and activity levels are useful tools for screening the risk of disability among community-dwelling elderly. However, memory impairment limits the credibility of the results obtained from these questionnaires. This result highlights the importance of objective parameters, such as grip strength, CC, or BMI, as screening measures for an increasing eligibility level in those certified at an LTC eligibility level. We must describe potential limitations of the findings of the present study. Since this was a cross-sectional study, it could not examine cause-effect relationships. Our ongoing prospective study will reveal risk factors for an increasing eligibility level among Japanese home help service users. Another limitation was the small sample of male subjects. Because of the gender differences in muscle strength and body composition, the analysis might be better carried out on a gender basis. 
Further study is needed to examine gender differences in the possible risk factors.

Conclusions. The findings of this study suggest that muscle strength, muscle mass, functional limitation, and depressive symptoms could be used to identify subjects at high risk of an increase in eligibility level. Our ongoing prospective cohort study will reveal the objective and subjective risk factors, as well as their cut-off values, for increasing eligibility level.
Dysmenorrhea and psychological wellbeing among females with attention deficit hyperactivity disorder

Although rarely examined together, ADHD, emotional regulation (ER), and dysmenorrhea may be associated, which could create additive burdens on psychological well-being (PWB). Clinicians working with ADHD populations may need to take these challenges into consideration to maximize treatment outcomes. This study investigated the relationships among ADHD, dysmenorrhea, ER, and PWB within a sample of 266 adult females with a self-reported ADHD diagnosis. ADHD symptom severity was positively correlated with dysmenorrhea severity, but ER skills were not a significant moderator of this relationship. ADHD symptom severity was negatively correlated with PWB; however, this relationship was moderated by neither dysmenorrhea severity nor ER ability. Overall, a positive association between ADHD symptom severity and dysmenorrhea severity was found in our sample. Further research is needed to understand the nature of this association, as well as factors that may contribute to PWB among individuals with these comorbid conditions.

level of impairment among adults with ADHD (Hirsch et al., 2019). Furthermore, this population experiences elevated rates of depression, anxiety, and suicidal ideation (Fuller-Thomson et al., 2016). Given the cumulative impact of the stressors females with ADHD endure, there is reason to believe that the condition negatively impacts the psychological wellbeing (PWB) of this population.

Although frequently co-occurring, ADHD and pain conditions are rarely recognized and managed simultaneously (Kerekes et al., 2021). Deficits in executive functioning negatively influence pain management and health-related quality of life; therefore, clinicians may need to consider potential interactions between pain and ADHD to maximize treatment outcomes (Caes et al., 2022). Consideration of pain is important among females with ADHD, as somatic symptoms may obscure proper psychiatric diagnosis, ER difficulties may amplify pain experiences, and pain can negatively impact an individual's wellbeing (Raja et al., 2020; Vaughan et al., 2019). Understanding the relationship between ADHD and dysmenorrhea (i.e. menstrual pain) is important, as dysmenorrhea has disruptive effects on attention and may exacerbate ADHD symptoms.

Dysmenorrhea refers to dull, throbbing, cramp-like pain typically emanating from the lower abdomen just before and/or during menstruation (Grandi et al., 2012). Primary dysmenorrhea occurs when menstrual pain is not caused by pelvic pathology; conversely, secondary dysmenorrhea (SD) occurs when menstrual pain is caused by underlying pathology (e.g. endometriosis; McKenna and Fogleman, 2021). Dysmenorrhea affects 50-90% of females of reproductive age, with SD accounting for 10% of these cases (McKenna and Fogleman, 2021). The negative impacts of dysmenorrhea are widespread and include impaired physical, social, and occupational functioning; reduced sleep quality; and lower quality of life during menstruation (Baker et al., 1999; Barnard et al., 2003; Iacovides et al., 2014). Depression and anxiety commonly coexist with dysmenorrhea; therefore, the pain and problems females face due to dysmenorrhea are frequently exacerbated by psychological factors (Bajalan et al., 2019).
The impact of dysmenorrhea is seldom considered among females with ADHD, despite evidence of pain hypersensitivity within ADHD populations. For example, researchers have found that individuals with ADHD may be at heightened risk for experiencing chronic pain conditions (e.g. Kasahara et al., 2021) and have lower pain thresholds than individuals without ADHD during pain induction tasks (Treister et al., 2015). Within the general population, adults with higher levels of ADHD symptoms are three times more likely to report extreme levels of pain than individuals with fewer ADHD symptoms (Stickley et al., 2016). Furthermore, prospective research indicates three-quarters of females with neurodevelopmental disorders (i.e. ADHD and autism) develop a chronic pain disorder and are five times more likely to experience widespread pain than neurotypical females (Asztély et al., 2019). Dopamine represents a potential link between pain and ADHD: decreased dopamine levels are associated with pain sensitization, and dopamine dysregulation is implicated in the etiology of ADHD; thus, females with ADHD may be more vulnerable to dysmenorrhea (Kerekes et al., 2021).

To date, only one other study has examined the association between dysmenorrhea and ADHD (Kabukçu et al., 2021). The results revealed that adolescents with dysmenorrhea symptoms affecting their daily life reported significantly more ADHD symptoms; furthermore, as the severity of their pain increased, so did the severity of their ADHD symptoms (Kabukçu et al., 2021). Further research is needed to verify the relationship between ADHD and dysmenorrhea, as well as to examine the ways in which these conditions may relate to an individual's psychological health. For instance, dysmenorrhea and ADHD commonly co-exist with anxiety and/or depression (Bajalan et al., 2019; Fuller-Thomson et al., 2016). Anxiety and depression symptoms should be considered when assessing the relationship between ADHD and dysmenorrhea to ensure it cannot be attributed to co-occurring psychological disorders alone. Moreover, consideration of variables that may mutually affect dysmenorrhea and ADHD, such as ER, may point to areas for treatment and management.

Both pain and ADHD are associated with ER. Pain is an aversive sensory and emotional experience; consequently, ER is a key element in pain management (Raja et al., 2020). Emotion dysregulation and persistent pain reinforce each other (Márki et al., 2017); conversely, adaptive ER strategies are associated with reduced pain intensity and better mood states before, during, and after painful experiences (Connelly et al., 2007; Ruiz-Aranda et al., 2010). Females with ADHD frequently struggle with ER, which is associated with reduced positive affect, elevated negative affect, higher ADHD symptomology, and increased comorbidity among adults with ADHD (Hirsch et al., 2019). Females with ADHD who struggle with ER may therefore experience heightened menstrual pain, leading to greater physical impairment and poorer PWB (Raja et al., 2020).
The purpose of this study is to examine the relationships among ADHD, dysmenorrhea, and PWB while considering the potential ways ER may impact both pain and PWB. Based on prior research, we hypothesized that (1) ADHD symptom severity would be positively correlated with dysmenorrhea severity; (2) the relationship between ADHD symptom severity and dysmenorrhea severity would be moderated by ER skills, such that poorer ER would be associated with more severe dysmenorrhea; (3) ADHD symptom severity would be associated with dysmenorrhea severity over and above the influence of anxiety and depression; (4) ADHD symptom severity would be negatively correlated with PWB; and (5) the relationship between ADHD symptom severity and PWB would be moderated by dysmenorrhea severity and ER skills, with higher levels of dysmenorrhea and poorer ER skills being associated with larger reductions in PWB. Figure 1 depicts the associations tested in this study.

Participants. Following receipt of Research Ethics Board approval, participants were recruited using online advertisements on Reddit. Individuals were eligible to participate if they were an adult female (i.e. at least 18 years old), had an ADHD diagnosis, experienced regular menstrual periods (i.e. at least three periods in the last 6 months), and lived in the United States or Canada. Within 1 month of active recruitment, 435 individuals responded to the survey. Participants who responded to less than 80% of the survey were considered to have withdrawn from the study; consequently, 139 responses were removed. Twenty-eight responses were removed for failing to meet the eligibility criteria. The final sample included 266 individuals. A power calculation was conducted using G*Power (Faul et al., 2007) for each planned analysis. The analysis requiring the most participants (i.e. multiple regression with five predictors; 1-beta = 0.80 and α = 0.05) was used to determine the required sample size and indicated that a minimum sample size of 92 participants was needed to detect medium-size effects.

Measures. Personal History Questionnaire. The Personal History Questionnaire was developed for the purposes of this investigation and involved three sections. Section one queried demographic information. Section two pertained to ADHD history, including age at diagnosis, use of medication and/or non-pharmacological ADHD interventions, and family history of ADHD. Section three queried menstrual history, including the level of pain typically experienced during menstruation, the presence of risk factors for dysmenorrhea (e.g. early onset of puberty), and the use of medication to manage menstrual pain (e.g. birth control).

Adult ADHD Self-Report Scale. The Adult ADHD Self-Report Scale (ASRS) is an 18-item self-report scale of ADHD symptom severity (Kessler et al., 2005). It assesses the frequency of ADHD symptoms on a 5-point Likert scale (0 = "never"; 4 = "always"), with higher scores indicating higher levels of ADHD symptoms. The ASRS has high scale score reliability (Cronbach's α = 0.93) and concurrent validity with clinician-administered measures (r = 0.72, p < 0.001) among community and clinic-based samples of adults with ADHD (Adler et al., 2012). The scale score reliability of the ASRS was acceptable (Cronbach's α = 0.78) in the present sample.
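The reported minimum sample size of 92 can be reproduced, at least approximately, without G*Power. The sketch below is one way to do it in Python under the usual conventions (Cohen's medium effect f² = 0.15 for multiple regression, noncentrality λ = f²·n); it is an independent check, not the authors' actual computation.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, k=5, f2=0.15, alpha=0.05):
    """Power of the overall F test in a multiple regression with k
    predictors, assuming noncentrality lambda = f2 * n (G*Power's
    convention for fixed-model R^2 tests)."""
    df1, df2 = k, n - k - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # P(reject | H1)

n = 10
while regression_power(n) < 0.80:
    n += 1
print(n)  # ~92, matching the reported minimum sample size
```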
Symptom Severity Scale. Dysmenorrhea severity was assessed using the Symptom Severity Scale (SSS; Chesney and Tasto, 1975). The SSS assesses the severity of menstrual symptoms and the degree of pain and discomfort an individual experienced during their last menstrual period. The SSS consists of 15 items rated on a 5-point Likert scale (0 = "symptom not present"; 5 = "very severely"). Higher total scores are indicative of increased symptom severity. The SSS had strong construct validity and scale score reliability (Cronbach's α = 0.93) in a previous sample of Canadian females (Gagnon and Elgendy, 2020). The SSS had good scale score reliability in the current sample (Cronbach's α = 0.89).

PROMIS short forms. Anxiety and depression symptoms were assessed using the PROMIS Anxiety Short Form 7a (PSF-A) and the PROMIS Depression Short Form 8b (PSF-D), respectively. Both measures are scored on a 5-point Likert scale (1 = "Never"; 5 = "Always"). The PSF-A consists of 7 items assessing the presence of anxiety symptoms (e.g. "I felt worried") over the past 7 days. Higher total scores reflect higher anxiety levels. The PSF-A has high scale score reliability (Cronbach's α = 0.87), and scores on this measure correlate with other previously validated anxiety assessment tools (Marrie et al., 2018). The PSF-D is an eight-item scale that assesses depression symptoms (e.g. "I felt sad") over the past 7 days. Higher total scores reflect increased depression symptoms. The PSF-D has excellent scale score reliability (Cronbach's α = 0.95) and strong construct and criterion validity (Marrie et al., 2018).

Difficulties in Emotional Regulation Scale. ER abilities were assessed using the Difficulties in Emotional Regulation Scale (DERS; Gratz and Roemer, 2004). The DERS consists of 36 items on a 5-point Likert scale (1 = "almost never"; 5 = "almost always") measuring an individual's emotional regulation deficits. Item scores are summed, and higher scores indicate greater difficulty in ER. The DERS has good construct validity, scale score reliability (Cronbach's α = 0.89), and test-retest reliability over a period of 8 weeks (Gratz and Roemer, 2004). The DERS has demonstrated strong internal consistency when used among ADHD samples (Ben-Dor Cohen et al., 2021). The DERS had excellent scale score reliability within the current sample (Cronbach's α = 0.94).

Procedure. Interested participants followed the link provided on the online advertisement to the study. The study questionnaires were hosted on SurveyMonkey. Participants were required to review consent information and provide informed consent prior to gaining access to the study. Participants were then asked to complete the study measures, which were presented in the same order across participants. Eligibility criteria were confirmed in the first section of the questionnaire. Participants who failed to meet the eligibility criteria were unable to complete the rest of the questionnaire and were thanked for their interest but informed that they were ineligible to participate. After completing the questionnaire, participants were debriefed using a written summary with further information about the purpose of the research.
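All of the scale score reliabilities quoted above are Cronbach's alpha values. For readers who want to recompute them from raw item responses, a minimal implementation of the standard formula is sketched below (the variable names are ours).

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with one row per respondent and one column per
    scale item. alpha = k/(k-1) * (1 - sum(item variances) / var(total))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g. cronbach_alpha(asrs_item_matrix) should be ~0.78 for this sample
```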
Data preparation. All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) 28 (IBM Corp, 2021). Missing data were addressed using person mean replacements to maximize data utilization (Downey and King, 1998). Nineteen participants required item replacements on one or more items; item mean replacements were completed only if no more than 20% of a participant's items were missing within a measure. Preliminary analyses were conducted to calculate means, standard deviations, ranges, and correlations of the study variables and are presented in Table 1. Skew and kurtosis were assessed for each variable, and all study variables approximated the normal distribution.

The relationships between ADHD symptom severity and dysmenorrhea severity (hypothesis 1) and between ADHD symptom severity and PWB (hypothesis 4) were examined using Pearson's product moment correlations. The moderating role of ER skills in the relationship between ADHD symptom severity and dysmenorrhea (hypothesis 2) was examined with a moderation analysis conducted via PROCESS, a macro for SPSS, in which the outcome variable was SSS scores, the predictor variable was ASRS scores, and the moderator was DERS scores (Hayes, 2022). A hierarchical multiple regression was conducted to test the hypothesis that ADHD symptom severity would be associated with dysmenorrhea severity over and above the influence of anxiety and depression (hypothesis 3). SSS scores were significantly correlated with age, r = −0.14, p = 0.02, and education, r = −0.22, p < 0.001; therefore, age and education were included in the first step of the regression to control for these variables. PSF-A and PSF-D scores were entered in the second step of the regression, ASRS scores were entered in step three, and SSS scores were the outcome variable. Lastly, a moderation analysis was conducted to test the hypothesis that the association between ASRS scores and PWBS scores would be moderated by DERS scores and SSS scores, using Model 2 of the PROCESS macro (Hayes, 2022). The outcome variable was participant scores on the PWBS, the predictor variable was ASRS scores, and the moderators were DERS scores and SSS scores.

Demographic, ADHD, and dysmenorrhea characteristics. Demographic characteristics are summarized in Table 2. The mean age of participants was 30.94 (SD = 6.59, range 18-51). Most participants (89.1%) identified as women. Most of the sample (71.1%) lived in the United States. The sample was predominately white (85%) and had a relatively high level of educational attainment.

On average, the participants reported receiving their ADHD diagnosis at the age of 26 (SD = 8.43, range = 4-50). Most participants (81.2%) took ADHD medication. Some participants (38%) used non-medical treatment for their symptoms (e.g. ADHD coaching or counseling). Most participants (61.7%) reported a family history of ADHD.
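For readers without SPSS/PROCESS, the simple moderation models described above (hypothesis 2, and Model 2 with two moderators) reduce to OLS regressions with interaction terms. A hedged Python equivalent is sketched below; the mean-centering and the variable names are our choices, and PROCESS's bootstrapped conditional effects are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

def moderation_fit(y, x, *moderators):
    """OLS analogue of a PROCESS moderation model: y ~ x + m_i + x:m_i
    for each moderator m_i, with predictors mean-centered so that main
    effects are interpretable at average moderator levels."""
    xc = x - x.mean()
    cols = [xc]
    for m in moderators:
        mc = m - m.mean()
        cols += [mc, xc * mc]  # moderator and its interaction with x
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(y, X).fit()

# hypothesis 2: moderation_fit(sss, asrs, ders)
# hypothesis 5 (PROCESS Model 2): moderation_fit(pwbs, asrs, ders, sss)
```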
Participants' mean age of menarche was 12 years (SD = 1.55, range = 7-18). Ninety-five percent of participants reported experiencing pain regularly during menstruation. Specifically, 0.4% never experienced pain, 4.5% rarely experienced pain, 16.9% sometimes experienced pain, 39.1% usually experienced pain, and 39.1% always experienced pain during menstruation. Twelve percent reported having secondary dysmenorrhea, 44% were on birth control, 18.8% had given birth, and 83.1% regularly used medication to manage their menstrual pain. On a scale from 0 to 10, participants on average reported their highest level of menstrual pain at 5.6, their average level at 4.0, and their lowest level at 2.0.

The model summary for the regression analysis conducted to test hypothesis 3 is in Table 3. In step 1, age and education level accounted for 5% of the variance in SSS scores, ΔF(2, 257) = 7.65, p < 0.001. The addition of PSF-A scores and PSF-D scores in step 2 led to an R² increase of 0.14, ΔF(2, 255) = 21.83, p < 0.001. The addition of ASRS scores in step 3 led to an R² increase of 0.10, ΔF(1, 254) = 34.84, p < 0.001. The full model accounted for 27.7% of the variation in SSS scores and was significant, F(5, 254) = 20.89, p < 0.001. In the final step, education level, β = 0.16, p = 0.003; PSF-A scores, β = 0.16, p = 0.03; PSF-D scores, β = 0.15, p = 0.03; and ASRS scores, β = 0.33, p < 0.001, contributed significantly to the model.

In the moderation analysis conducted to test hypothesis 5, there was a significant main effect of ASRS scores on PWBS scores (b = −1.78), and the relationship between ASRS scores and PWBS scores was moderated by DERS scores. The interaction between ASRS scores and DERS scores accounted for a small but significant amount of variance in PWBS scores, R² change = 0.01, p < 0.05. The overall model was significant, F(5, 260) = 29.24, p < 0.001, R² = 0.36.
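The ΔF statistics in Table 3 follow directly from the R² change formula for hierarchical regression. As a sanity check, the snippet below recomputes the step-3 value from the reported quantities (n = 260 is implied by the degrees of freedom); small discrepancies reflect rounding of the published R² values.

```python
from scipy.stats import f as f_dist

def r2_change_F(r2_full, r2_reduced, n, k_full, k_added):
    """F test for the R^2 increase when k_added predictors enter:
    F = (dR2 / k_added) / ((1 - r2_full) / (n - k_full - 1))."""
    df1, df2 = k_added, n - k_full - 1
    F = ((r2_full - r2_reduced) / df1) / ((1 - r2_full) / df2)
    return F, df1, df2, 1 - f_dist.cdf(F, df1, df2)

# Step 3: R^2 rose by ~0.10 to 0.277 when ASRS entered the 5-predictor model
print(r2_change_F(0.277, 0.177, n=260, k_full=5, k_added=1))  # F ~ 35, p < .001
```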
Discussion. The purpose of this study was to examine the interrelations of ADHD, dysmenorrhea, ER skills, and PWB. Understanding associations among these variables is imperative, as clinicians working with ADHD populations may need to take challenges with dysmenorrhea, ER, and PWB into consideration throughout treatment. Key incremental advances of our study include assessing the relationship between ADHD symptom severity and dysmenorrhea severity within a sample of adults previously diagnosed with ADHD while considering the potential influence of anxiety and depressive symptoms, evaluating the PWB of this population, and examining the influence of ER in these relationships. Overall, the results suggest the hardships faced by females with ADHD extend beyond high levels of the core symptoms of their disorder to include emotional dysregulation and dysmenorrhea.

This is the first investigation, to our knowledge, to demonstrate that females with ADHD report high levels of menstrual pain; a 95% comorbidity rate was reported in the current sample. ADHD symptom severity was positively correlated with dysmenorrhea severity, as hypothesized and as found by others (Kabukçu et al., 2021). Furthermore, ADHD symptom severity was associated with dysmenorrhea severity over and above the influence of anxiety and depression. Taken together, these results suggest there is a unique relationship between ADHD symptom severity and dysmenorrhea severity. Clinicians working with populations with ADHD or dysmenorrhea should therefore consider the potential effects of both conditions across clinical settings, due to the possibility of symptoms of one condition influencing symptoms of the other (Gagnon et al., 2022). The relationship between ADHD symptom severity and dysmenorrhea severity may be explained through dopamine dysregulation, such that decreased dopamine levels are associated with both pain sensitization and ADHD (Kabukçu et al., 2021; Kerekes et al., 2021). Alternatively, the association between dysmenorrhea severity and ADHD symptom severity may occur due to the disruptive effect menstrual pain has on attention (Keogh et al., 2014).

PWB and ADHD symptom severity were negatively correlated, as hypothesized. Possible explanations for this association include high levels of core symptoms, comorbid psychological distress, and a late age of ADHD detection that prevented most participants from accessing resources required to adaptively manage their ADHD until adulthood (Vildalen et al., 2019). The average age of ADHD diagnosis within the present sample was 26 years, which supports the notion that females with ADHD frequently receive their diagnoses at later ages than males (Brown et al., 1991; Grevet et al., 2006). Remaining undiagnosed until adulthood is associated with negative outcomes for females with ADHD, such as affective symptoms (Rucklidge and Kaplan, 1997). ADHD symptom severity was most strongly negatively correlated with the Environmental Mastery and Self-Acceptance subscales of the PWBS. This aligns with previous research findings that women with ADHD have poor self-esteem, struggle with disorganization, respond to life stressors with emotion, and feel they have a lack of control over their situations (Quinn and Madhoo, 2014). Similarly, ADHD symptom severity was negatively correlated with the Positive Relations with Others subscale, reflecting the impairment females with ADHD typically face within interpersonal relationships (Young et al., 2020). Contrary to the hypothesis, the relationship between ADHD symptom severity and PWB was
not moderated by dysmenorrhea severity. Since menstrual pain is cyclical, it is possible that dysmenorrhea may not affect constructs such as PWB that are thought to be relatively stable over time.

Participants in the present study displayed high levels of emotional dysregulation, indicated by average DERS scores approximately one standard deviation above the normative reference value (Giromini et al., 2017). Difficulties in ER moderated the relationship between ADHD symptom severity and PWB, which aligns with the researchers' hypothesis and with previous findings that poor ER is associated with dysfunctional coping strategies and life dissatisfaction among females with ADHD (Young et al., 2020). However, the moderating effect of ER on the relationship between PWB and ADHD symptom severity is unlikely to be clinically meaningful due to the small effect size. The relationship between ADHD symptom severity and dysmenorrhea severity was not moderated by difficulties in ER. This finding was unexpected, as adaptive ER has been associated with reduced pain intensity (Connelly et al., 2007; Ruiz-Aranda et al., 2010). ER may not influence pain severity among individuals with ADHD, as they may exhibit a predisposition to pain that cannot be remediated by emotion-focused pain coping strategies. Nevertheless, a focus on ER skills may improve the functioning of those affected by ADHD and dysmenorrhea separately.

The findings of the current research must be viewed in light of the study's limitations. The research was correlational in nature; thus, the causes behind the relationships discovered cannot be inferred. The narrow scope utilized within the present study may be considered a limitation. While measuring dopamine dysregulation went beyond the scope of the study, doing so may have provided more clarity on the relationship between ADHD symptom severity and dysmenorrhea severity. Similarly, including additional variables known to have a negative impact on females with ADHD may have increased the model's ability to explain factors contributing to the negative association between ADHD symptom severity and PWB. We utilized a convenience sample that lacked ethnic and gender diversity, which limits the generalizability of the results. The sample may have further been influenced by selection bias, such that individuals who are more severely impacted by ADHD and dysmenorrhea may have been more willing to participate, especially when considering that participants were recruited from online women's ADHD communities that often serve a supportive function, that no incentive for participation was provided, and that discourse on mental health and menstruation is sensitive in nature.
Despite these limitations, we can offer several directions for future research. Future research should adopt a longitudinal approach to better understand the temporal order of the relationship between ADHD symptom severity and dysmenorrhea severity. Such research should compare participants' ADHD symptoms during menstruation and at non-painful points in their menstrual cycle to determine the potential impact of dysmenorrhea on ADHD symptoms. Further research on ADHD, dysmenorrhea, and PWB should recruit a non-affected comparison group to better understand the unique experiences of females with ADHD. Future research should consider implementing a strength-based lens while assessing ER among women with ADHD, as this may identify areas of relative strength that may be utilized to promote overall functioning within this population. Finally, our research suggests there is a relationship between ADHD symptom severity and dysmenorrhea severity; furthermore, the PWB and ER skills of this population were lower than what might be expected. Consequently, treatment approaches that may improve these comorbid experiences, such as mindfulness-based interventions, should be considered (Gu et al., 2018; Mitchell et al., 2017; Singleton et al., 2014).

In sum, we examined the relationships among ADHD, dysmenorrhea, ER, and PWB. The results indicated there was a positive association between dysmenorrhea severity and ADHD symptom severity. This relationship was not moderated by ER ability, nor could it be explained by symptoms of depression and anxiety alone. Thus, the present study is the first to illustrate a relatively robust relationship between ADHD symptom severity and dysmenorrhea severity. The results further revealed that PWB was negatively associated with ADHD symptom severity; however, neither dysmenorrhea severity nor ER abilities moderated this relationship to a clinically meaningful extent. Future research is required to understand the causes behind these relationships. Nevertheless, the present study suggests there is a need to consider dysmenorrhea within treatment for females with ADHD.

Figure 1. Proposed theoretical relationships among the variables of interest within the present study. Solid lines depict previously reported relationships; dotted lines depict previously unknown and untested relationships.

Table 1. Descriptive statistics and correlations between study variables.

Table 2. Summary of participant demographic characteristics.

Table 3. Hierarchical regression analysis for variables predicting PD severity.
Comparative analysis of the quantitative parameter method and elasticity color mode method for real-time shear wave elastography in the diagnosis of benign and malignant solid breast lesions

Objective: To examine the performance of real-time shear wave elastography (RT-SWE) in routine clinical practice. Methods: This was a prospective study of 500 patients. The elasticity color mode method was judged by a four-mode system. The quantitative parameter method was used to measure the modulus of elasticity of the lesions. Pathologic reports were used as the gold standard to comparatively analyze the diagnostic performance of the two methods. Results: A total of 553 tumors were detected. The average mode value and the modulus of elasticity (Emax) of the benign breast masses were lower than those of the malignant masses (p < 0.05). With Emax = 67.4 as the diagnostic threshold value, the differences in sensitivity, specificity, accuracy, negative predictive value, and positive predictive value between the two methods were not statistically significant (p > 0.05). Conclusions: The shear wave quantitative parameter method and the elasticity color mode method showed similar performance in the diagnosis of benign and malignant breast masses. The elasticity color mode method is convenient and intuitive, whereas the quantitative parameter method can be used to objectively assess lesions when it is difficult to score the elasticity image, but it cannot be relied on alone.

Introduction. Ultrasound elastography is a new technique that has developed rapidly in recent years and has attracted much attention. It can reveal the histologic characteristics of a lesion by detecting the elasticity of the tissue. The hardness of tissue detected by real-time shear wave elastography (RT-SWE) is expressed using the modulus of elasticity (Young modulus, E); a series of elasticity indicators, such as the maximum (Emax), mean (Emean), minimum (Emin), and standard deviation (ESD) of the tissue elasticity, can be obtained simultaneously in real time, making RT-SWE a true elasticity quantification technique. [1][2][3][4][5][6] The main methods for evaluating RT-SWE are the quantitative parameter method and the elasticity color mode method, both of which show good diagnostic performance, 7-14 but they produce slightly different results when different parameters and samples are used. [15][16][17][18] In clinical practice, due to the complexity and variety of the internal components of the masses and the subjective factors of different physicians, the elasticity color modes assigned to the acquired images may not be the same. A comparison of the diagnostic efficacy of the two methods has not yet been reported. We compared breast mass analysis using the elasticity quantitative parameter method and the elasticity color mode method, aiming to provide a basis for improving the accuracy of breast cancer diagnosis.

Patients. This prospective study was approved by the ethics committee of our hospital and was implemented in accordance with ethical principles, with informed consent signed by the participating patients. The demographic information and imaging examinations of the patients were managed by designated individuals, and the physicians performing the ultrasound examinations were blinded to them.
This study included 500 patients from January 2015 to May 2018. There were a total of 553 solid breast lesions, including 357 benign lesions in 306 patients and 196 malignant lesions in 194 patients. The following inclusion criteria were used: female patients aged 18-80 years; underwent both 2D breast ultrasonography and RT-SWE; and received ultrasound-guided biopsy (16-G disposable biopsy needle; Bard) or surgical treatment, with pathologic results obtained. Exclusion criteria were as follows: pregnancy or lactation; neoadjuvant chemotherapy, radiotherapy, or chemotherapy, or a history of breast augmentation or prosthesis implantation; lesions immediately adjacent to scar tissue; or refusal to participate in the study.

Ultrasonography. An Aixplorer full digital color Doppler ultrasound diagnostic apparatus (SuperSonic Imagine) was used, and a high-frequency probe with a frequency of 4-15 MHz was selected.

Conventional scanning. The examination focused on the location, size, shape, border, and internal and posterior echoes of the mass; the system was then switched to elastography, using a dual-image mode. For the elastography examination, after applying sufficient coupling agent, the probe was placed and stabilized on the body surface as gently as possible to avoid applying pressure; no pressure-induced abnormal color area should appear on the monitor in the tissue below the probe unless the lesion is very close to the skin. During image acquisition, the patient was asked to hold their breath, and the images of each lesion were collected three times in succession. After a stable image was obtained, it was frozen, captured, and saved.

Region of interest selection. The region of interest (ROI) was set as a circle and followed four conditions. (1) As the rule for the lesion shape, the ROI was set inside the lesion, covering the entire lesion with the least surrounding tissue and including the area of greatest hardness on the elasticity color image. (2) If focal or annular plaque-like colored areas appeared at the border or periphery of the lesion, the rule of a "larger ROI" was applied to include some or all of the focal or annular colored areas but the least possible surrounding normal tissue. (3) When measuring Eratio, the reference ROI, with a diameter of 2-5 mm, was selected in a "normal tissue area" at the same depth as the lesion, showing no abnormality on conventional ultrasound and a uniform dark blue color on the elasticity color image. (4) If the lesion was large or irregular and there was no normal tissue area on the elasticity color image, the normal tissue area at another position could be selected to measure the modulus of elasticity, and the E/B ratio of the lesion was then calculated manually.

Repeatability study. Thirty lesions (14 benign, 16 malignant) included in the study were randomly selected, and conventional ultrasonography and SWE examinations of these lesions were carried out by two independent sonographers. The consistency of the elasticity scores between the two sonographers was evaluated using the kappa value. The consistency of the Young modulus for the same patient between the two sonographers was evaluated using the intraclass correlation coefficient to determine the repeatability of the measured modulus of elasticity on SWE. A difference with p < 0.05 was considered statistically significant.
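The inter-rater checks described above can be reproduced with standard formulas. Below is a minimal Python sketch of Cohen's kappa for the two sonographers' mode assignments (variable names assumed); the intraclass correlation coefficient for the continuous Young-modulus measurements would be computed analogously from a two-way ANOVA decomposition.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical calls
    (here, elasticity color modes 1-4): kappa = (po - pe) / (1 - pe)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected by chance
             for c in categories)
    return (po - pe) / (1 - pe)

# e.g. cohens_kappa(modes_sonographer_a, modes_sonographer_b)
```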
Diagnostic criteria. The elasticity color images were analyzed based on the elasticity color model proposed by Tozaki et al. 6 and supplemented by Shi et al. 19 The images were classified as follows. Mode 1: uniform blue areas inside and around the lesion. Mode 2: vertical abnormal color stripes inside and at the border of the lesion. Mode 3: a focal or ring-shaped abnormal color change in areas around the lesion; if the abnormal color stripes of mode 2 also appeared, the image was classified as mode 3. Mode 4: uneven abnormal colored areas inside the lesion. If the color code inside the lesion was not shown in the center of the area, it was classified by the surrounding color according to the above rules. Modes 1 and 2 were considered to indicate benign lesions, and modes 3 and 4 malignant lesions. The diagnostic thresholds of breast lesions for the series of quantitative SWE parameters were determined using the receiver operating characteristic (ROC) curve. To evaluate the repeatability of the elasticity quantitative parameters and elasticity color modes of SWE, two sonographers randomly selected 30 breast lesions from the included patients to measure the modulus of elasticity and assess the elasticity color mode.

Image analysis. All 2D and elasticity images were read by two independent ultrasound diagnosticians with 7-20 years of experience and more than 6 months of SWE diagnostic experience. They did not participate in the ultrasound image acquisition and were blinded to the clinical data and pathologic results of the patients. First, each ultrasound diagnostician analyzed the 2D conventional ultrasound images and used the Breast Imaging Reporting and Data System (BI-RADS) classification criteria for diagnosis. Second, the two diagnosticians independently analyzed and evaluated the elasticity color characteristics according to the elasticity color model of Tozaki. 6 Finally, the malignancy of the mass was judged based on the cutoff values of the modulus of elasticity.

Statistical analysis. The statistical analysis was performed using SPSS 18.0 statistical software (IBM), and the measurement data were represented as the mean ± standard deviation (x̅ ± s). Differences in measurement data between two groups were compared using an independent sample t test, while differences among multiple groups were compared using one-way analysis of variance. The area under the curve (AUC) was calculated from the ROC curve and compared with the z test. The diagnostic performance of each method was evaluated, and the optimal diagnostic critical point was determined. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the two methods were compared using the McNemar test. A difference with p < 0.05 was considered statistically significant.

A total of 29 of the 196 malignant lesions were false-negative; these were diagnosed as nonspecific invasive carcinoma (ductal carcinoma, n = 15; lobular carcinoma, n = 1), carcinoma in situ (n = 4), medullary carcinoma (n = 3), metaplastic carcinoma (n = 1), mucous carcinoma (n = 2), diffuse large B-cell lymphoma (n = 2), and apocrine carcinoma (n = 1). A total of 63 of the 357 benign lesions were false-positive, including fibroadenoma (n = 29, 7 associated with calcification), intraductal papilloma (n = 4), adenosis with or without fibroadenoma (n = 15), sclerosing adenosis (n = 5), lobular neoplasms (n = 3; borderline tumor, n = 2; benign, n = 1), fibrocystic hyperplasia (n = 4), and others (n = 3).
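The ROC-based threshold selection described above is typically done by maximizing the Youden index along the empirical ROC curve. A minimal sketch of that procedure is given below, assuming per-lesion Emax values and pathologic labels are available as arrays; this mirrors how a cutoff such as Emax = 67.4 and its sensitivity/specificity could be derived, though the study itself used SPSS.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_true, scores):
    """Return the threshold maximizing the Youden index
    (sensitivity + specificity - 1), plus the metrics at that point."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    youden = tpr - fpr
    i = int(np.argmax(youden))
    return {
        "cutoff": thresholds[i],
        "sensitivity": tpr[i],
        "specificity": 1 - fpr[i],
        "youden": youden[i],
        "auc": roc_auc_score(y_true, scores),
    }

# e.g. optimal_cutoff(is_malignant, emax_values)
```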
Analysis of the role of conventional 2D ultrasonography and the elasticity color model in the combined diagnosis of malignant breast lesions. Among the 196 malignant lesions, 139 (70.9%) were clearly diagnosed by both conventional 2D ultrasonography and the elasticity color model. Among the cases evaluated by conventional 2D ultrasonography, 163 (83.1%) were diagnosed correctly, and 33 (16.9%) were misdiagnosed. Among the cases evaluated by the elasticity color model, 167 (85.2%) were diagnosed correctly, and 29 (14.8%) were misdiagnosed as false negatives (Figure 1).

Results of the elasticity quantitative parameters and elasticity color model. The correlation coefficients for Emax, Emean, Emin, and ESD between the two sonographers were 0.89, 0.78, 0.80, and 0.74, respectively. The consistency of the elasticity color modes was moderate (Figure 2). The benign breast lesions were assigned elasticity color modes 1 and 2 in 328 of 357 cases, and the malignant breast lesions were assigned elasticity color modes 3 and 4 in 168 of 196 cases. The difference between the two groups was statistically significant (p < 0.001) (Figure 3, Table 2).

Comparison of the diagnostic performance of the SWE elasticity quantitative parameter method and elasticity color mode method. Because pathologic diagnosis was used as the gold standard throughout, we compared the results of the 2D and four-mode evaluations with the pathologic diagnosis. The AUCs corresponding to Emax, Emean, Emin, the E/B ratio, ESD, and the elasticity color modes were 0.95, 0.89, 0.51, 0.91, 0.93, and 0.93, respectively (Figure 4). Emin showed an area under the ROC curve of 0.512 (p > 0.05). Evaluation based on Emax, ESD, and the elasticity color mode showed good diagnostic performance, and the diagnostic performance of Emax was the best across the breast lesions examined (Table 3). At maximum sensitivity and specificity, the optimal diagnostic thresholds for Emax, Emean, the E/B ratio, and ESD were 67.4, 25.75, 2.75, and 6.54, respectively. The corresponding sensitivity, specificity, and Youden index were 0.86, 0.95, and 0.82 for Emax; 0.87, 0.73, and 0.60 for Emean; 0.87, 0.81, and 0.68 for the E/B ratio; and 0.81, 0.92, and 0.73 for ESD. The elasticity quantitative parameter method and elasticity color mode method were applied for the diagnosis of the different breast lesions (Table 4). There were no significant differences in the sensitivity, specificity, accuracy, positive predictive value, or negative predictive value between the two methods (p > 0.05; p values were 0.586, 0.394, 0.296, 0.417, and 0.579, respectively).

Discussion. The histologic characteristics of a lesion determine its echogenicity and hardness. Elastography can identify benign and malignant lesions by assessing the hardness information of the tissue. Because benign lesions have no cellular atypia, the interstitial cells and the cells inside the tumor are relatively uniform, with no invasiveness to the surrounding tissues. Thus, their modulus of elasticity parameters are more stable than those of breast cancer, and their modulus of elasticity values should be lower than those of breast cancer. 20,21
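The paired comparison of the two methods reported above uses the McNemar test on their agreement with the pathologic gold standard. A short Python sketch of that comparison follows; the input encoding (one correct/incorrect flag per lesion and method) is our assumption.

```python
from statsmodels.stats.contingency_tables import mcnemar

def compare_paired_methods(correct_a, correct_b):
    """McNemar test on the discordant pairs of two diagnostic methods
    evaluated on the same lesions against the pathologic diagnosis."""
    n11 = sum(a and b for a, b in zip(correct_a, correct_b))
    n10 = sum(a and not b for a, b in zip(correct_a, correct_b))
    n01 = sum(b and not a for a, b in zip(correct_a, correct_b))
    n00 = sum(not a and not b for a, b in zip(correct_a, correct_b))
    table = [[n11, n10], [n01, n00]]
    return mcnemar(table, exact=False, correction=True).pvalue

# e.g. compare_paired_methods(correct_by_emax_cutoff, correct_by_color_mode)
```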
This study showed that the indicators of the modulus of elasticity method and the elasticity color mode method were significantly different between benign and malignant breast lesions (p < 0.05), and 54 BI-RADS category 4 lesions with unclear boundaries and irregular shapes were evaluated by the modulus of elasticity method as benign lesions, including mastitis, sclerosing adenosis, and hyperplasia. Therefore, ultrasound elasticity can help to characterize BI-RADS category 4 lesions that are difficult to diagnose. With increasing elasticity color mode level, the modulus of elasticity values Emax and ESD also gradually increased, and the differences between the elasticity color modes were statistically significant (p < 0.05).

Our study showed that Emax was the most reproducible parameter, which is related to the maximal coverage of the lesion and the surrounding abnormal area by the ROI. Because the hardest part of the lesion was included, the size of the ROI has little impact on Emax, a finding similar to those in related studies. 6,7 In this study, Emax was highly positively correlated with the elasticity color mode (r = 0.817). Of the 196 malignant lesions, 101 were classified as mode 3 or showed a stiff rim sign, and the Young modulus Emax was mostly more than 100 kPa. Therefore, the stiff rim sign in the elasticity color mode method is an effective and clear sign of malignant lesions, reflecting the invasiveness of the lesion into the surrounding tissues. 14,22,23

ESD is a modulus of elasticity parameter that reflects the degree of internal dispersion and variation in the lesion and corresponds to the arrangement of different pathologic components in the lesion. The larger the ESD value, the more uneven the modulus of elasticity distribution in different regions of the lesion and the more significant the heterogeneity. Therefore, the ESD value can translate the subjective characteristic of tissue uniformity into a quantifiable indicator. 1 In this study, 55 lesions were classified as mode 4, reflecting the heterogeneity of malignant lesions, and ESD was confirmed to be highly positively correlated with the elasticity color mode (r = 0.736), a finding consistent with that of Gweon. 24 Therefore, the elasticity color mode method provides a good diagnostic index that reflects the hardness of the tissue and, to a certain extent, the distribution of tissue hardness. This method has diagnostic performance similar to that of the shear wave quantitative parameter method in the diagnosis of benign and malignant breast masses. The two methods are consistent and complementary.

The elasticity color images of the breast are complex and diverse, and the patterns of benign and malignant lesions can overlap. When a malignant lesion is complicated by necrosis, liquefaction, or hemorrhage, its hardness can be reduced, resulting in a false-negative diagnosis 25 ; on the other hand, when calcification occurs or the degree of fibrosis increases in a benign lesion, the hardness of the lesion may be increased, which may lead to a false-positive diagnosis. 26 Because RT-SWE is a quantitative method, the results can be more objective. 7,27 The quantitative parameter method can be used especially when it is difficult to score the elasticity image.
Among the 357 benign lesions, most of the false-positive results were for sclerosing adenosis, borderline phyllodes tumors, calcified tumors, and atypical hyperplasia, probably due to increased lesion hardness caused by the large number of hard stromal cells. 28 Among the 196 malignant lesions, most of the false-negative results were early breast cancer and special types of breast cancer, such as medullary carcinoma, mucinous carcinoma, and lymphoma. This is because medullary carcinoma is mainly composed of cells and is frequently accompanied by necrosis and hemorrhage, while the tumor cells of mucinous carcinoma can secrete mucus, resulting in relatively soft lesions, especially in pure mucinous carcinoma. Relying on ultrasound elasticity alone fails to correctly diagnose the above false-positive or false-negative lesions, and a comprehensive analysis and diagnosis should be carried out based on 2D ultrasonography, assisted by ultrasound elasticity. In this study, a combination of 2D ultrasonography and ultrasound elasticity was applied. Among the 196 malignant lesions, misdiagnosis in 14 of the 33 false-negative cases could have been avoided; among the 357 benign lesions, misdiagnosis in 28 of the 63 false-positive cases could have been avoided. The combination of these two methods can reduce false positives and false negatives and improve the accuracy of diagnosis, which is consistent with the results of previous studies. 7,29

This study has some limitations. First, RT-SWE is mainly based on conventional 2D ultrasound imaging. Although a double-blind method was applied and the physicians reading the images did not participate in the image acquisition process, it is difficult to completely avoid the influence of 2D baseline information on the elasticity imaging. 30 Second, the ROI was set as a circle by default, but most lesions have different shapes. Some lesions might not be completely covered, such as lymphoma, or too much normal tissue in the immediate vicinity of the lesion might have been included. Although the effect on Emax is minor, there is a certain degree of impact on the other elasticity parameters and the diagnostic thresholds. Third, the size and depth of the lesion, as well as the thickness of the breast, have a certain effect on the results of elastography, 31 and these were not considered here. In addition, this is a single-center prospective study, and the results may be limited by the selected samples.

Conclusion. RT-SWE has good clinical value in the differential diagnosis of benign and malignant breast lesions. The shear wave quantitative parameter method and the elasticity color mode method have similar performance in the diagnosis of benign and malignant breast masses; the two methods are consistent and complementary. In clinical practice, the elasticity color mode method is convenient and intuitive, while the quantitative parameter method can be used to objectively judge the lesion when it is difficult to score the elasticity image. The combination of the two methods can reduce false positives and false negatives in the diagnosis of breast lesions and improve diagnostic efficacy.

Declaration of conflicting interest. The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Current Status of Antiretroviral Therapy (ART) for Acquired Immunodeficiency Syndrome (AIDS) in Bangladesh

The first case of an unknown syndrome was reported in the USA in 1981, characterized by a profound drop in CD4+ T lymphocyte counts and subsequent immune depression of patients. In those days, the disease was called "Gay pest", "Gay cancer", or Gay-related immune deficiency (GRID), due to its major incidence among men having sex with men (MSM) 2,3. Further demonstration that heterosexual patients were equally susceptible to infection led to its official definition as Acquired Immunodeficiency Syndrome (AIDS) 4. The etiological agent of AIDS was first identified in 1983 by the French virologist Luc Montagnier, whose group identified retroviral particles and reverse transcriptase activity in cultures of lymphocytes isolated from AIDS patients. This was the first report associating a retrovirus with AIDS, but it was not conclusive regarding their causal relationship. Less than a year later, the group led by Robert C. Gallo at the National Cancer Institute provided solid evidence in four reports supporting the hypothesis of a new retrovirus as the causal agent of AIDS [5][6][7][8][9]. The cornerstone of Gallo's work was the replication of the new virus in a tumor cell line of lymphoid origin (H9), providing enough viral material to characterize its proteins and to develop serologic diagnostic methods to detect virus-specific antibodies in patients' sera 10. Consequently, the nucleotide sequences of two different but similar viruses were elucidated, markedly different from any previously identified human retrovirus 11,12. This was the basis for denominating the new entity the Human Immunodeficiency Virus (HIV) 12.

Burden in Bangladesh: Worldwide, the total number of people living with HIV is 36.7 million; 20.9 million people are receiving antiretroviral therapy; there were 1.8 million new HIV infections; and 1.0 million deaths were due to AIDS. 13 In Bangladesh, the first HIV case was detected in 1989. HIV prevalence remains less than 0.01% among the general population. The estimated number of people living with HIV is 11,700. In 2017, the total number of new cases was 856. The number of cases reported in connection with the Rohingya crisis, from 25 August 2017 to date, is 168. In 2015, treatment was given to 4,665 people living with HIV, and in the last year the total number of ARV recipients was 2,642. 13

The first antiretroviral agent, zidovudine (AZT), a nucleoside reverse transcriptase inhibitor (NRTI), was shown to have a positive impact on clinical progression and death 14. The challenges of early NRTI regimens included high pill burdens, inconvenient dosing, treatment-limiting toxicities, and incomplete virological suppression. Sequential monotherapy and incomplete virological suppression resulted in the emergence of multiple resistance mutations, with long-term treatment consequences. In the treatment of human immunodeficiency virus (HIV) infection, protease inhibitors (PIs) and non-nucleoside reverse transcriptase inhibitors (NNRTIs), introduced in the mid-1990s, revolutionized management. Highly active antiretroviral therapy (HAART) regimens, consisting of two NRTIs plus a PI or NNRTI, were capable of virological suppression (<400 copies ml−1), and widespread uptake quickly led to dramatic reductions in morbidity and mortality in the developed world 15. HAART provides effective treatment options for treatment-naive and treatment-experienced patients.
Common antiretroviral drugs. Antiretroviral drugs are classified into the following groups.

Reverse transcriptase inhibitors. Reverse transcriptase inhibitors are a group of drugs that bind and inhibit the reverse transcriptase enzyme to intercept the multiplication of HIV. There are two types of inhibitors: non-nucleoside reverse transcriptase inhibitors (NNRTIs) 16 and nucleoside reverse transcriptase inhibitors (NRTIs) 17. Examples of this group of drugs include zidovudine, didanosine, abacavir, tenofovir, and Combivir.

Protease inhibitors. Regulation of HIV protease is of high importance for the correct assembly and production of HIV. Protease inhibitors effectively block the functioning of protease enzymes in acutely and chronically HIV-infected CD4 cells. Inhibition of HIV protease enzymes results in the liberation of immature and noninfectious viral particles 18. Examples of this group of drugs include lopinavir/ritonavir, indinavir, ritonavir, nelfinavir, and amprenavir.

Fusion inhibitors. This class of drugs acts by blocking HIV from entering the CD4 cells of infected patients; they inhibit the fusion of HIV particles with the CD4 cells 19. Enfuvirtide is an example of a fusion inhibitor used in HIV treatment.

Chemokine receptor 5 antagonists. This group of drugs prevents infection by blocking the chemokine receptor 5 (CCR5) present on CD4 cells. In the absence of vacant CCR5 receptors, HIV fails to gain entry and infect the cell 20. Maraviroc is an example of a CCR5 antagonist used in HIV treatment.

Integrase strand transfer inhibitors. Strand transfer inhibitors prevent the integration of viral DNA into the host genome of CD4 cells by the integrase enzyme. Blocking integrase prevents HIV from replicating 21. Raltegravir, elvitegravir, and dolutegravir are some medications in this category.

ART should be initiated in all individuals with HIV regardless of WHO clinical stage or CD4 count. Adherence to treatment is of paramount importance in order to achieve the full efficacy of treatment and to prevent the emergence of drug resistance 23. The strategy of using two NRTIs plus a potent third agent still forms the cornerstone of current treatment principles and is now referred to as combination antiretroviral therapy. According to the WHO guideline, there are first-, second-, and third-line treatment options 22.

First-line ART. Adults: First-line ART for adults consists of two NRTIs and one NNRTI. Tenofovir disoproxil fumarate (TDF) + lamivudine (3TC) or emtricitabine (FTC) + efavirenz (EFV) as a fixed-dose combination is the favored choice for this type of ART. When this drug combination is contraindicated or unavailable, an alternative first-line regimen should be used.

Pediatric patients: Patients below three years of age should be given lopinavir/ritonavir (LPV/r)-based treatment, even under NNRTI exposure. When LPV/r is not a viable option, NVP-based treatment should be used. For infected children over age three, EFV is the ideal NNRTI, while NVP has been identified as the second option. For infected children younger than three years of age who develop TB while on LPV/r-based treatment, the NRTI regimen should be switched to abacavir (ABC) + 3TC or AZT + 3TC until the TB infection is cleared. NRTI regimens similar to those of adults (TDF + 3TC (or FTC), AZT + 3TC, or ABC + 3TC) are preferred for patients between 10 and 19 years of age who weigh 35 kg or more.
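As a condensed illustration of the first-line logic above, the following Python sketch maps age and weight to the preferred regimen family. The branch thresholds are simplified from the text, edge cases (TB co-infection, contraindications) are omitted, and this is in no sense clinical guidance.

```python
def first_line_regimen(age_years, weight_kg):
    """Illustrative condensation of the WHO first-line choices above."""
    if age_years >= 19 or (age_years >= 10 and weight_kg >= 35):
        # adults, and adolescents weighing 35 kg or more
        return "TDF + 3TC (or FTC) + EFV, fixed-dose combination preferred"
    if age_years < 3:
        return "LPV/r-based regimen (NVP-based if LPV/r is not viable)"
    # children aged 3 to <10 (and lighter adolescents)
    return "two NRTIs + EFV (NVP as the second NNRTI option)"

print(first_line_regimen(age_years=34, weight_kg=60))
print(first_line_regimen(age_years=2, weight_kg=11))
```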
Second-line ART. Adults, including pregnant and breastfeeding patients: When first-line ART fails, a second-line ART regimen should be used, comprised primarily of two NRTIs and a ritonavir-boosted PI. The recommended NRTI option for second-line ART is AZT and 3TC. After failure of an AZT- or stavudine (d4T) + 3TC-based first-line regimen, TDF + 3TC (or FTC) should be considered as the NRTI backbone. When first-line NNRTI-based treatment fails, two NRTIs plus a boosted PI are suggested.

Pediatric patients: For children below three years of age, first-line ART is continued even when it fails; no change in treatment is recommended, and instead adequate steps should be taken to improve adherence to the ART regimen. If first-line ART fails in children aged three and up, a second-line treatment consisting of one NNRTI and two NRTIs should be given. If ABC or TDF + 3TC (or FTC) fails, the recommended option is AZT + 3TC. After failure of AZT or d4T + 3TC (or FTC) in first-line treatment, the preferred NRTI option is ABC or TDF + 3TC (or FTC).

Third-line ART. If first- and second-line ART fail, the WHO recommends the inclusion of new medicines with the least risk of cross-resistance to previously used drugs (e.g., integrase inhibitors and second-generation NNRTIs and PIs).

Factors to consider when selecting ART. The major factors that deserve thorough consideration when choosing an ART regimen for a patient include the viral load and CD4 cell count before treatment, the result of HIV genotypic drug resistance testing, HLA-B*5701 status, patient preferences, and anticipated adherence. Comorbid conditions to screen for prior to ART include cardiovascular disease, hyperlipidemia, renal disease, osteoporosis, psychiatric illness, neurologic disease, drug abuse or dependency requiring narcotic replacement therapy, pregnancy, and coinfection with hepatitis C (HCV), hepatitis B (HBV), or tuberculosis (TB) 24. Contraindications should also be considered: tenofovir should be avoided when creatinine clearance is less than 50 ml per minute, and efavirenz should be avoided in patients who are pregnant or trying to conceive.

CD4 count monitoring for therapeutic response. Monitoring patients' viral load is critical to identify the ART response. When viral load analysis is not practical via polymerase chain reaction (PCR), branched-chain DNA (bDNA), or nucleic acid sequence-based amplification (NASBA), the CD4 count is used as an indicator of HIV treatment response. During the first year of treatment, increases in CD4 count of 50 to 150 cells/mm3, with the greatest response in the first three months, are considered a positive response. The CD4 count then rises steadily by 50 to 100 cells/mm3 per year until equilibrium is reached (normal range: 500 cells/mm3 to 1200 cells/mm3) 25. Periodic monitoring of the CD4 count is required during treatment and even after the patient achieves a normal CD4 count under ART. A number of treatment-independent factors, such as age, viral load, genetic make-up, lifestyle, and quality of health care, negatively influence CD4 counts and HIV disease progression. Under such circumstances, a change in ART medication might be required.

In Bangladesh, two pharmaceutical companies (Beximco and Square) produce six different types of ARV drugs (abacavir + lamivudine + zidovudine, efavirenz, lamivudine + zidovudine, lamivudine + zidovudine + nevirapine, and nelfinavir).
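The CD4 monitoring benchmarks above lend themselves to a simple screening rule. The sketch below encodes them in Python purely as an illustration of the stated numbers (a gain of 50 to 150 cells/mm3 in year one, then 50 to 100 per year, normal range 500-1200); it is not a clinical decision tool.

```python
def cd4_response(baseline, current, years_on_art):
    """Classify immunologic response against the benchmarks quoted above."""
    if current >= 500:  # within the normal range (500-1200 cells/mm3)
        return "normal range reached; continue periodic monitoring"
    gain = current - baseline
    # minimum expected gain: ~50 cells/mm3 in year one, then ~50 per year
    expected_min = 50 * max(1, years_on_art)
    if gain >= expected_min:
        return "adequate response"
    return "suboptimal response; review adherence and regimen"

print(cd4_response(baseline=180, current=320, years_on_art=1))  # adequate
print(cd4_response(baseline=180, current=200, years_on_art=2))  # suboptimal
```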
The specific objectives of an ART centre are to: 1) provide care, support, and treatment services to all PLHIV and monitor patients in HIV care (pre-ART) regularly; 2) identify eligible PLHIV requiring ART and initiate them on ART in a timely manner as per national ART guidelines; 3) provide ARV and OI drugs to eligible PLHIV; 4) provide treatment adherence and counselling services before and during treatment to ensure high levels of drug adherence; 5) counsel and educate PLHIV, caregivers, guardians, and family members on nutritional requirements, hygiene, positive living, and measures to prevent further transmission of infection; 6) refer patients requiring specialized services (including admission) to other departments or higher facilities; 7) provide a comprehensive package of services, including condoms and prevention education, with a view towards "Positive Prevention"; and 8) ultimately integrate HIV care into the general health system for long-term sustainability 26. The regimens that are available in the ART centre are:
The Concept of Contradiction Finding and Classification in the Field of Marketing Communication Quality Management

This article presents a new concept of a systematic approach to quality management of marketing communication. The major research objective was to develop a method which facilitates efficient quality management of marketing communication in a holistic manner. The elements of marketing communication are defined, and the subsystem of marketing information is distinguished. A qualitative model of marketing communication was developed referring to the basics of Qualitology and the principles of qualitative modeling, valuation, and the systemic approach. Using the ENV and OTSM models of TRIZ contradiction, the problem of qualitative contradictions in the marketing communication system was indicated. Grey Incidence Analysis and the Theory of Correlation and Regression are used to identify the structure of marketing communication and to find contradictions within its structure. The inventive principles for solving contradictions in the field of marketing are indicated. The innovative aspect of the research consists of an application of qualitative modeling, methods of Grey System Theory and the Theory of Correlation and Regression, and methods of the Theory of Inventive Problem Solving. The method introduced enables the recognition and classification of contradictions according to their impact on marketing communication quality. In the last section, the direction for further research in the field of systematic thinking application for marketing communication quality management is indicated.

Introduction

The work considers the problem of contradictions in the area of quality management of marketing communication of industrial enterprises. The increasing complexity of production processes and the pursuit of enterprises for innovation (product, process, organizational or marketing) requires the creation of relationships with relevant market players (Pacholski et al., 2011). In the activities of industrial enterprises, several kinds of relationships can be distinguished, among others (Mantura, 2015; Morgan and Hunt, 1994): relationships with the supply market (i.e., between the company and the suppliers and co-operators in the market supply), relationships with the sales market (i.e., between the company and the clients (agents and final customers) and co-operators in the distribution channel), relationships in the system of competition (i.e., between the company and competitors, current and potential, as well as industry and non-industry), relationships with the system of power (i.e., between the company and the institutions of power), relationships with the education system (i.e., between the company and the education institutions), relationships with the research system (i.e., between the firm and research institutions), and relationships with the social system (i.e., inter alia, opinion-forming entities that affect the way in which the public perceives changes shaped by the company in diverse activities and situations). The management of the enterprise's relationships in a complex and dynamically changing market structure requires from the management the so-called new-generation competences (Duncan and Moriarty, 1998), allowing for the creation of cross-functional knowledge (Korhonen-Sande and Sande, 2014) and effective adaptation of the company to changing market environment conditions (Rajnoha et al., 2014). Shaping the relationships between the company and its market environment and the relationships between functionally diverse
organizational units of the company is one of the basic functions of marketing (Gummesson, 1994). The key roles here are played by the processes of information transfer as part of marketing communication. External Marketing Communication, i.e., communication between organizational units of the company and market environment entities, ensures the understanding of the intentions, capabilities and potential of the company's partners (Andersen, 2001). Internal Marketing Communication, i.e., communication between organizational units of the company, enables, among others, recognition of the surrounding market reality from different perspectives and contributes to the so-called synergy effect in shaping the targeted changes of the company and its environment (Lu et al., 2007). Internal Marketing Communication enables the integration of information shaped by functionally diversified employees of the company. Company employees are perceived as so-called "part-time marketers" (Gummesson, 1987). A comprehensive approach to the marketing information of company employees is to improve the actions taken to adapt the company to the market environment and to impact this environment in order to achieve its own goals. The aim of this paper is to develop the concept of Contradiction Finding and Classification for improving the quality of Internal Marketing Communication of an industrial enterprise. When developing the concept of contradiction finding and classification, three different theories based on the principles of system thinking were referred to, namely: 1. the basics of Qualitology (Kolman, 2009; Mantura, 2010), in order to develop a qualitative marketing communication model; 2. the basics of Grey System Theory (Liu et al., 2016), to order the relations between particular features of elements belonging to the marketing communication system; 3. the basics of the Theory of Inventive Problem Solving (Altshuller, 1996, 1999), in order to model and solve the problem of contradictions emerging during the improvement of the quality of marketing communication. The studies also refer to the basics of Correlation and Regression Theories, by applying the method of correlation analysis using Pearson's linear correlation coefficient to check whether there is a problem of contradictions between the elements of marketing communication. Firstly, the paper explains the essence of Qualitology, the Theory of Grey Systems and the Theory of Inventive Problem Solving and points out the principles, methods and tools used in this work. Secondly, a qualitative model of the marketing communication system was developed, defining the set of its elements and their structure. Attention was paid to the problem of contradictions between the various qualitative categories of the marketing communication system, emerging at the stage of designing desirable quality changes. Thirdly, a method for classifying qualitative contradictions was developed, according to their importance determined by the impact of certain contradictions on the changes in the state of marketing information quality (cf. Cavallucci et al., 2011). Fourthly, methods and tools derived from the basics of TRIZ that support the resolution of current and significant qualitative contradictions to improve the marketing communication of the company have been distinguished.
Characteristics of the applied concepts of system thinking

The use of methods, techniques and research tools of three different concepts that take the basis of a system approach, as well as methods of mathematical analysis of phenomena correlations, requires the indication of their essence and the relevance of their applicability for solving specific problems in managing the quality of enterprise marketing communication. The subsequent subsections explain the essence and selected methods and tools of Qualitology, the Theory of Grey Systems and the Theory of Inventive Problem Solving.

Qualitology

Qualitology is an interdisciplinary field of knowledge dealing with all issues related to quality. Research referring to the fundamentals of Qualitology can be classified into two basic fields (Borys, 1984), i.e., Qualitonomy, as the descriptive field of the quality theory (Kolman, 2009; Mantura, 2010), and Qualimetry, as the formal field in the quality theory, dealing with the use of numeric (mathematical-statistical) methods in quality theory and their application (Borys, 1984; Azgaldov et al., 2015). The general goal of Qualitology is "to create a scientific basis for qualitative cognition and qualitative shaping of reality by man". The qualitative approach in researching and shaping objects is expressed in the application of the principles of a qualitative approach (Mantura, 2010), i.e., the Principles of Qualitative Mapping, Anthropocentrism (humanocentrism), Complexity, Systemicity, Synergy, Kinetics, Probability, Evaluation, Optimization, Normalization and Economics. In the literature on Qualitology, it is assumed that the qualitative principles can be applied to every object being recognized or created (Mantura, 2010). In this paper, the Principle of Systemicity is applied, which consists in adopting the basics of general system theory and treating the object as a system. The system is a set of elements remaining in mutual relations. The principle of systemicity explains the "mysterious theorem" (von Bertalanffy, 1968) that "the whole is more than the sum of parts" (Aristotle), which consists in the fact that "constitutive characteristics are not explainable from the characteristics of isolated parts" (von Bertalanffy, 1968). It is assumed that the internal structure of the object is a set of relations between the qualitative categories (features and their states) belonging to the elements of this object (subsystem), and the external structure is a set of relations between qualitative categories belonging to the object and qualitative categories belonging to the objects in its surroundings (supersystem). In the conducted research, in order to identify the relations between the elements of marketing communication, selected methods of Grey Systems Theory were used. This work also applies the Principle of Quality Evaluation, which consists in considering the need to transform the non-evaluated (absolute) quality of objects into evaluated quality. This is connected with the Principle of Anthropocentrism, i.e., an axiological approach to reality and consideration of the quality of the object in relation to the system of human needs, values, goals and requirements. Quality evaluation is associated with the so-called phenomenon of Differentiation of Valued Quality. The reason for this phenomenon is the antinomy (contradiction) of the nature of the features evaluated on the basis of various criteria
for assessing their value. This means that a feature belonging to an object can simultaneously take the form of: 1. a maximizing, more-is-better feature (stimulant), i.e., a quantity that is beneficial for large values from the variability range of the feature; 2. a minimizing, less-is-better feature (drawback, destimulant), i.e., a quantity favourable for small values from the variability range of the feature; 3. an optimizing feature (mediment), i.e., a quantity best suited for intermediate values from the feature variability range (Kolman, 2009, p. 66). In the research conducted, for the solution of the emerging problem of qualitative contradictions, reference was made to the fundamental postulate of classical TRIZ, the postulate of contradictions, and to methods for modeling contradictions for business and management.

Grey System Theory

The Theory of Grey Systems was introduced relatively recently, in China in 1982. It was created by a Chinese scholar, Professor Deng Julong, and presented in the publication entitled "The Control Problems of Grey Systems" (Liu and Lin, 2006; Cempel, 2014; Liu et al., 2016). It is assumed, as in control theory, that the darkness of colours is used to indicate the degree of clarity of information. The word "black" is employed to represent unknown information, "white" for completely known information, and "grey" for information which is partially known and partially unknown (Liu and Lin, 2006). Research methods and procedures developed within the framework of Grey Systems Theory allow inference based on incomplete, uncertain and scarce information about the systems being studied (Liu and Lin, 2006). In the conducted research, the state of quality of particular elements in the marketing communication system is evaluated by the employees of a company's organizational units. Therefore, it is assumed that incomplete information means a set of characteristics defining the individual elements in the marketing communication system, limited for pragmatic reasons. Uncertainty of information results from the cognitive limitations and the experience of the people who evaluated the states of the features belonging to the elements of the marketing communication system. Limited information refers to the research sample; grey methods define the system mapping procedure based on a minimum sample of n ≥ 4 (Cempel, 2014). In the concept of Contradiction Finding and Classification for improving the quality of marketing communication, the Grey Incidence Analysis (GIA) method of Grey Relationship Analysis was applied. This method addresses questions such as which factors among the many are more important than others, which have more effect on the future development of the system than others, which cause desirable changes in the system (so that these factors need to be strengthened), and which hinder a desirable development of the system (so that they need to be controlled) (Liu and Lin, 2006). For such a future research objective, the basics of cooperative grey games, to capture the dynamics of interaction among individual assessments of the quality of marketing communication, can be applied (Fang et al., 2010; Palancia et al., 2017).
Theory of Inventive Problem Solving

The Theory of Inventive Problem Solving was developed by Genrich Saulovich Altshuller in the period from 1946 to 1998, out of the need to systematize methods of solving creative problems (Skoryna and Cempel, 2010). The essence of the developed concepts, in the most general terms, is to "help the inventor to use his current inventory of knowledge and experience most effectively" (Altshuller, 1975) by adopting a systematic approach to solving complex problems. The Theory of Inventive Problem Solving (TRIZ) has been greatly developed and has demonstrated great efficacy in solving difficult technical problems. The current stage of TRIZ evolution and its popularity have been illustrated by world interest in TRIZ, the intensity of TRIZ usage in industry, its recognized areas of application, and how aware the world is of TRIZ compared to other innovation methodologies (Abramov and Sobolev, 2019). Initially, TRIZ was used only to solve technical problems, but over time its application has expanded into organizational, educational and social problems, as well as those related to broadly understood business. For example, 40 Inventive Principles in Quality Management have been developed that cover the fields of quality standards, quality control, quality assurance, reliability, customer focus, supplier selection, project management, and improvement teams (Retseptor, 2003), as well as 12 innovation principles for business and management (Ruchti and Livotov, 2001). It is pointed out that the application of TRIZ to business and management has worked in such areas of business operations as, e.g., increasing sales effectiveness, generating a new marketing concept, product or process, analysing customers' behaviours and their preferences related to the innovativeness of products, resolving a number of conflicts within a supply chain, discovering a new market for a service, predicting potential failures of a new business model, generating radically new advertising concepts, and risk management (Monnier, 2004; Souchkov, 2007; Regazzoni and Russo, 2011; Pryda et al., 2018; Renaud et al., 2018; Koziołek, 2019).
As part of TRIZ, principles guiding thinking in solving inventive tasks have been defined (principles as such, not specific formulas and rules) for organizing creative thinking independently of the area of human activity (Altshuller, 1975). Classical TRIZ is based on three Fundamental Postulates. First, the postulate of objective laws, which means that engineering systems evolve not randomly but according to certain laws of evolution. Second, the postulate of contradictions, which means that an inventive task is characterized by the fact that it requires the solution of so-called technical, physical or administrative contradictions (Altshuller, 1975). These contradictions emerge when the improvement of certain system properties comes into conflict with other system properties (Andrzejewski and Jadkowski, 2013). Altshuller states that the origin of any innovation problem is a contradiction: "Any problem, to be solved with TRIZ, must be formulated in such a way that it states a contradiction" (Khadija et al., 2019). Third, the postulate of the specific situation, which means that each inventive (non-typical, creative) problem arises within its own individual context (Khomenko and Ashtiani, 2007). Classic TRIZ tools include ARIZ (Algorithm for Inventive Problem Solving), Inventive Standards, and Substance-Field analysis. A set of complementary theories originated by TRIZ can also be distinguished, i.e., the Theory of Technical System Evolution (TRTS in Russian) and the Theory of the Development of Creative Personalities (TRTL); Altshuller also promoted the initiative to build a General Theory of Powerful Thinking (OTSM), which helps with the development of powerful thinking skills (Altshuller and Filkovsky, 1975, in Cascini, 2012). While developing the concept of contradiction finding and classification in order to improve the quality of marketing communication, the contradiction toolkit (Gadd, 2011) and the ENV model were adopted, taking into account the need to classify contradictions regarding their importance (Cavallucci et al., 2011) for positive qualitative changes in marketing communication. Contradiction modeling has been applied to identify or solve, e.g., engineering, educational, managerial, knowledge management, and sociological problems (Messaoudene, 2018; Nakagawa, 2018; Slim et al., 2018; Livotov et al., 2019). As a result of the analysis of current research work, it can be concluded that selected TRIZ methods and tools are used to solve problems in the area of marketing (Semenova, 2004); the need to consider the role of marketing in engineering creativity, especially relationship marketing, which supports the integration of different sources of information (Belski et al., 2019), is also indicated. Furthermore, attention is drawn to the diverse problems emerging in TRIZ application in marketing management and the need to conduct research that may lead to their solution (Zouaoua et al., 2010). Research is also being conducted in which TRIZ is combined with uncertainty methods such as Grey Relation Analysis (Lin et al., 2011) or Fuzzy Logic (Su and Lin, 2008), and TRIZ application to deal with volatility, uncertainty, complexity, and ambiguity in the world is indicated (Kiesel and Hammer, 2018). The innovative aspect of the concept of Contradiction Finding and Classification proposed in this paper is the application of the principles and fundamental operations of quality, methods of Grey System Theory and the Theory of Correlation and Regression, and methods of the Theory of Inventive Problem Solving for improving the
quality of marketing communication.

Marketing communication system

In an etymological sense, the term "communication" derives from the Latin communicare (i.e., to be in a relationship with, participate in, associate with). Goban-Klas (2001, p. 43) indicates that for some authors communication denotes all forms of information transfer, both between people and between animals and machines. In this work, the approach used in the social sciences, especially in sociology, is adopted, and the scope of marketing communication is limited to the transmission of information between people and market players. Marketing communication is the transmission of marketing information between entities on the market (Mantura, 2012; Wiktor, 2013). In our research, the Internal Marketing Communication of the company is considered, which, taken individually, occurs under the name of the Marketing Communication Process. This process illustrates the transmission of marketing information between the organizational units of the company. The marketing communication process is mapped by a specific arrangement of elements such as (see Lasswell, 1968; Westley and MacLean, 1957; Mantura, 2012): the entry to the marketing communication process (i.e., the source of marketing information, reality components, market objects), the subjects of action in the marketing communication process (i.e., organizational units of the enterprise, which are characterized by a specific semiotic system, a conceptual thesaurus and knowledge), the tools of action in the marketing communication process (i.e., the marketing communication channel and marketing communication tools, means of communication), the object of action in the marketing communication process (i.e., marketing information, content, message), the result of actions in the marketing communication process (i.e., the message of marketing information received), and the exit from the marketing communication process (i.e., the recipient's reaction and the qualitative change in the market situation). Internal Marketing Communication occurs within a specific organizational culture of the company. Specifying the term organizational culture for the research purposes of this work, the approach according to E. H. Schein (2004) is adopted. He indicates that the essence of organizational culture is a set of basic beliefs that have been established or adopted in order to solve the difficulties faced by the organization in adapting to external conditions and achieving internal integration. Treating the organizational culture as a product of social interaction, Schein distinguishes its three levels: basic assumptions (which form the foundation for other cultural components and define the essence of existence, human nature, reality and the perception of truth), norms and values (which constitute a set of principles of everyday activities of group members, shaped by the impact of dominant values, thanks to which the group members know how to cope in specific situations), and artefacts (which are manifestations of culture but do not constitute its essence). The basic assumptions are defined, e.g., by the attitude to the environment, the explanation of the nature of reality, beliefs about human nature, human activity, and interpersonal relations. Norms and values are determined by, among others, the values and norms declared and the values and norms observed. Artefacts are determined by language artefacts, behavioural artefacts, and physical artefacts.
4. The quality of the marketing communication system

According to the basics of Qualitology, the work adopts an epistemological (descriptive) definition of quality and the axiological criterion of the value of objects, defining the valued (relative) quality of objects. In this approach, quality is expressed in a set of features, and the quality of an object is the set of features belonging to it (Mantura, 2010, p. 49). Determining the quality of any object consists in recognizing, postulating and formulating the set of features belonging to it. The quality of the object is described by a finite set of features and is treated in a holistic approach, i.e., it is expressed by the set of features that belong to it together with their structure. In practice, features are identified on objects only in the form of specific states of their own. The state of quality of the object is determined by at least one state of each feature belonging to it. Conceptualization of the features belonging to the object and their states in the Value Relation (Rv) with a defined system of human needs, goals and requirements is the basis for transforming the quality of the object into a valued (relative) state of the object's quality. The general and universal criterion of quality evaluation is the effectiveness of satisfying the set of needs, achieving goals and meeting human requirements (Mantura, 2010). The overall marketing communication quality model is presented in Figure 1. The Value Relation refers to the relationship between the state of marketing communication quality and its usability in achieving the goals and functions of marketing in the company's organizational culture. According to the paradigm of systemic thinking adopted in the work, the goal of the existence of a given system (i.e., the marketing communication of a company) is determined by the objective of its environment, the supersystem it belongs to (i.e., the supersystem of marketing in the organizational culture of the company). The Relation of Impact expresses the so-called causality ratio (cf. Kotarbiński, 1975) between elements occurring in the form of causes and elements occurring in the form of effects. In this paper, referring to the basics of information theory, it is assumed that the state of the quality of marketing information is influenced by features belonging to the marketing communication channel. The adequacy of these assumptions was empirically verified in the authors' earlier research work concerning the development of an Integrated Marketing Communication Quality Management Method in the industrial company. The qualitative model of marketing information was tested by thirty industrial companies, and the relation between the marketing communication channel (i.e., the frequency of using a specific form of marketing communication channel, such as advertising tools, public relations, sales activation, direct marketing, personal sales, personal promotion and partnership with market entities, or internal marketing communication tools) and the states of marketing information quality was confirmed (Majchrzak, 2018). In this paper, the marketing communication channel is defined only by the methods of internal marketing communication, such as personal meetings, phone calls, e-mail messages, and interactive multimedia communication. Correlation allows one to recognize whether a change (increase/decrease) in the state of features belonging to the communication channel is accompanied by a change (increase/decrease) in the state of features belonging to marketing
information. Recognizing the direction of correlation makes it possible to check whether a given feature belonging to organizational units or marketing communication channels is a feature of a minimizing nature (the smaller the value, the better) or a maximizing nature (the higher the value, the better) in relation to a given feature belonging to marketing information. In order to identify the problem of qualitative contradictions in the marketing communication system, the selected elements were described in accordance with the ENV model (Element, Name of the property, Value of the property), which is a universal model adopted in OTSM-TRIZ for representing any kind of problematic situation (Cascini, 2012). The results of modeling the quality of marketing information and the marketing communication channel based on the ENV model are summarized in Table 1. The research assumes the interrelation between the specified elements of the marketing communication system. It is assumed that changing one part of the system can have a negative effect on other parts of the system (Altshuller, 1994). The problem of contradictions in the marketing communication system is determined using the model of a contradiction, which consists of at least three parameters (Khomenko et al., 2007), where: 1. the Evaluation Parameters (EP), constituting a measure of the satisfaction of system requirements, refer to the states of at least two features of marketing information; 2. the Control Parameter (CP), whose value impacts, with opposite results, both Evaluation Parameters, refers to the state of a feature of the marketing communication channel. A contradiction occurs when two Evaluation Parameters are coupled in such a way that the attempt to improve either of them determines the worsening of the other (Cascini, 2012), which is shown in Figure 2. When developing the concept of Contradiction Finding and Classification, the set of contradictions between the various features of the marketing communication channel and the features of marketing information is defined first. Then, the contradictions are classified according to their importance for positive changes in the quality of marketing information in a comprehensive approach.

Stages of contradiction finding and classification

Contradictions between specific elements of marketing communication are identified by recognizing the Relation of Impact and the Correlation between the various features of the marketing communication channel and marketing information. Another method of contradiction finding, based on functional analysis, was presented, e.g., for deriving the problem from the function of the product (Yang et al., 2018). In order to recognize the relation of impact, the Grey Relations method of Grey Systems Theory was used (Liu and Lin, 2006). This method allows one to check whether, and how strong, the impact of particular system factors (CP: control parameters) is on specific system characteristics (EP: evaluation parameters) (Liu and Lin, 2006, p. 120). The level of impact of system factors on the nature of the system is identified by calculating the coefficient of the absolute (total) degree of incidence εij. The coefficient εij between the vectors of variable factors, Xi, and the characteristics, Yj, of the system was chosen due to the following properties, significant in the study of the impact relationship between the specific elements of the marketing communication system (Liu and Lin, 2006, p. 112; Cempel, 2014, p.
15): 1. εij is only related to the geometric shapes of Xi and Yj, and has nothing to do with their spatial positions; in other words, moving the sequences horizontally does not change the value of the absolute degree of grey incidence; 2. any two sequences are not absolutely unrelated, that is, εij never equals zero; 3. the more geometrically similar Xi and Yj are, the greater εij; when Xi and Yj are parallel, or when the zero-start image of Xi vibrates around that of Yj with the area of the parts above it equal to the area of the parts beneath it, εij = 1; 4. when any one of the data values in Xi or Yj changes, εij also changes accordingly; 5. when the lengths of Xi and Yj change, εij also changes accordingly; 6. εii = 1, εjj = 1; 7. εji = εij.

In studies on grey relationships, different types of sequences of variable factors and system characteristics are specified, including: a behavioural sequence (a sequence of variables versus system observation conditions, e.g., evaluation of the same variable by different experts, in different conditions), a behavioural time sequence (evaluation of the same variable determined at different times), a behavioural criterion sequence (e.g., different system characteristics evaluated by various experts), and a behavioural horizontal sequence (e.g., the value of a given characteristic determined for different objects) (Liu and Lin, 2006, p. 88). The individual calculation activities carried out in the method of testing grey relationships between control and evaluation parameters are explained below. Here, XiD is the image of the i-th observation vector of system behaviour transformed with respect to the zero-starting-point operator, YjD the image of the j-th observation vector of system behaviour transformed with respect to the zero-starting-point operator, i the system factor (i.e., control factor), and j the system characteristic (i.e., evaluation factor). Operation 3. Calculation of the behaviour measures of the system observation vectors by adding, subtracting and taking the quotient of their values (Liu and Lin, 2006, p. 104). Here, |si| is the measure of the behaviour of the i-th factor of the system, |sj| the measure of the behaviour of the j-th system characteristic, and |si − sj| the measure of the behaviour of the i-th factor of the system relative to the j-th characteristic of the system. Operation 4. Calculation of the value of the impact coefficient εij between specific factors and system characteristics (Liu and Lin, 2006, p. 103): εij = (1 + |si| + |sj|) / (1 + |si| + |sj| + |si − sj|). Here, εij is the coefficient of the absolute level of impact between the i-th factor of the system and the j-th system characteristic. The coefficient of the absolute level of impact, εij, takes values within the range (0, 1]. The higher the value of the coefficient, the greater the impact of the i-th factor of the system on the j-th characteristic of the system.
Operation 5. Adding the values of the impact coefficients εij for each of the system's factors and ordering the system's factors in relation to the strength of their impact on the system's characteristics (cf. Liu and Lin, 2006, p. 130). This leads to an ordering of the features belonging to the communication channel in relation to the strength of their impact on the states of the features belonging to marketing information. In the example considered, the feature fCh1 (personal meetings) has the greatest impact on changes in the quality state of marketing information. The purpose of the next calculation activities is to recognize the direction of correlation between the states of the marketing communication features and the states of the marketing information features. To accomplish this goal, the method of analysing the correlations of phenomena using Pearson's linear correlation coefficient was used. Operation 6. Calculation of the value of the Pearson correlation coefficient r according to the formula (Rutkowski, 2004, p. 30): r = Σ(xi − x̄)(yj − ȳ) / (N · σx · σy). Referring to the example presented, it was recognized that for the features fCh1 and fCh3 there is no opposite correlation with the features belonging to marketing information. Therefore, to improve the quality of marketing information, the value of the feature fCh1 should be increased and the value of the feature fCh3 belonging to the marketing communication channel should be reduced. The occurrence of contradictions was identified for the features fCh2 and fCh4. When designing qualitative changes, it is first of all considered what the states of those control parameters should be which have the strongest impact on the changes in the evaluation parameters, in accordance with the assumption that those elements of the system which have the greatest impact on changing the desired features of the system should be changed first (cf. Gadd, 2011, p. 104). The course of the proposed operations and the steps applied in the concept of contradiction finding and classification in the field of marketing communication quality management is shown in Figure 3. The operations refer to the specified methods of Grey Relation Analysis and Correlation Analysis, and the results of specific operations are applied to achieve the objectives of the proposed steps in contradiction finding and classification. In solving the problem of contradictions, the problem of physical contradictions is considered first and a set of separation principles is applied; then a set of appropriately selected inventive principles for solving problems of technical contradictions is referred to (cf. Gadd, 2011, pp. 120-134). A specific set of inventive principles is of a general nature and should be adapted and applied taking into account the specifics of the problem being resolved (Altshuller, 1975). In solving the problem of qualitative contradictions in the marketing communication system, interpretations of the standard inventive principles developed for solving problems in the area of marketing are used (Retseptor, 2005).
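To make the computation behind Operations 1-7 concrete, the following is a minimal Python sketch: the zero-starting-point transform, the behaviour measures |si|, |sj| and |si − sj|, the absolute degree of grey incidence εij as formulated in Liu and Lin (2006), and the Pearson-based sign used to fill the contradiction matrix. The function names, the pandas data layout, and the toy data are illustrative assumptions, not material taken from the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def zero_start(x):
    # Zero-starting-point operator D: X^0(k) = X(k) - X(1).
    x = np.asarray(x, dtype=float)
    return x - x[0]

def s_measure(x0):
    # Behaviour measure |s| = |sum of the interior points + half the last point|.
    return abs(x0[1:-1].sum() + 0.5 * x0[-1])

def grey_incidence(x, y):
    # Absolute degree of grey incidence for equal-length sequences (n >= 4):
    # eps = (1 + |si| + |sj|) / (1 + |si| + |sj| + |si - sj|), in (0, 1].
    x0, y0 = zero_start(x), zero_start(y)
    si, sj = s_measure(x0), s_measure(y0)
    sij = s_measure(x0 - y0)
    return (1.0 + si + sj) / (1.0 + si + sj + sij)

def contradiction_matrix(cp, ep):
    """cp: DataFrame of control parameters (channel features fCh1..fCh4);
    ep: DataFrame of evaluation parameters (information features fIm1..fIm10).
    Returns the per-pair impact strength and correlation character, plus the
    CPs whose character differs across EPs (candidate contradictions)."""
    rows = []
    for c in cp.columns:
        for e in ep.columns:
            eps = grey_incidence(cp[c].values, ep[e].values)
            r, _ = pearsonr(cp[c], ep[e])
            rows.append({"CP": c, "EP": e, "eps": round(eps, 3),
                         "character": "max (+)" if r >= 0 else "min (-)"})
    cm = pd.DataFrame(rows)
    mixed = cm.groupby("CP")["character"].nunique()
    return cm, sorted(mixed[mixed > 1].index)

# Toy example: six assessments of two channel features and two information
# features (values are made up purely for illustration).
cp = pd.DataFrame({"fCh1": [3, 4, 4, 5, 5, 6], "fCh2": [2, 3, 5, 4, 6, 5]})
ep = pd.DataFrame({"fIm1": [4, 5, 5, 6, 7, 7], "fIm10": [6, 5, 4, 4, 3, 3]})
cm, contradictory = contradiction_matrix(cp, ep)
print(cm)                                  # ordering by eps = Operation 5
print("candidate contradictions:", contradictory)
```

Sorting the rows by εij reproduces the ordering of Operation 5, and a mixed "+/-" character for one channel feature is exactly the situation the paper flags as a qualitative contradiction; in this toy data the flagged feature is arbitrary and does not reproduce the paper's fCh2/fCh4 result.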
Conclusion and Outlook

This paper deals with the problems of identifying and classifying contradictions for the purpose of managing the quality of marketing communication of an industrial enterprise. In the developed method, reference was made to the basic principles and operations of Qualitology, i.e., the Principle of Qualitative Mapping, the Principle of Systemicity, and the Principle of Quality Evaluation. At the stage of modeling qualitative contradictions, the ENV and OTSM models of TRIZ contradiction were used. The Evaluation Parameters (EP) in the conducted research refer to the states of the marketing information features, and the Control Parameter (CP) refers to the state of a feature of the marketing communication channel. In order to determine the relation of impact between the Control Parameters (CP) and the Evaluation Parameters (EP), the Grey Incidence Analysis method was used. In order to determine the direction of correlation between the Control Parameters (CP) and the Evaluation Parameters (EP), the method of analysing the correlations of phenomena using Pearson's linear correlation coefficient was applied. It has been assumed that when designing qualitative changes, the most important issue is to consider what the states of the Control Parameters should be which have the strongest impact on the changes in the Evaluation Parameters in the overall approach. It was pointed out that in solving the problem of qualitative contradictions, a set of separation principles is applied first, followed by the interpretation of the standard inventive principles developed for solving problems in the area of marketing. The course of this research process is shown in Figure 4. The aim of future research is, first, to verify the developed concept, including designing a questionnaire to assess the state of the quality of marketing communication in an industrial enterprise; second, to create computer software supporting the computational activities at the stage of using the Grey System Theory method and the statistical method of analysing the correlations of phenomena; third, to integrate the developed method with other commercially available methods of computer-aided problem solving (e.g., Innovation WorkBench); and fourth, to apply a mathematical model, such as the system dynamics approach of modern Operational Research (Pedamallu et al., 2012a) and computer simulation modeling techniques (Pedamallu et al., 2012b), to improve the classification of contradictions according to their impact on marketing communication quality. Furthermore, the proposed concept of contradiction finding and classification can be applied and verified in other fields of the industrial company's structure.

Legend (Figure 1): fIm1, fIm2, ..., fIm10 denote the features belonging to marketing information; fCh1, fCh2, ..., fCh4 the features belonging to the marketing communication channel; Rv the Value Relation; Rw the Relation of Impact; and Rk the Correlation between the states of individual features belonging to the communication channel and the states of features belonging to the marketing information.
In the Pearson correlation formula, xi and yj are the particular values of the vectors Xi and Yj; x̄ and ȳ the mean values of the variables; σx and σy the standard deviations of the independent variable x and the dependent variable y; and N the population size. The Pearson correlation coefficient r takes values in the range from −1 to 1. The sign of the coefficient informs about the direction of correlation. A positive correlation means that the features of the marketing communication channel are characteristics of a maximizing character, and a negative correlation indicates characteristics of a less-is-better nature in relation to the features of marketing information. The recognized character of the features is put together in the so-called Contradiction Matrix. Here, CM denotes the contradiction matrix; ↑ a feature of the communication channel of a maximizing character; and ↓ a feature of a minimizing character in relation to individual features of marketing information.

Fig. 3. The course of the operations and steps in contradiction finding and classification in the field of marketing (Source: own elaboration). The figure summarizes the steps: (1) define the Control Parameters (CP), i.e., the states of features of the marketing communication channel (X1, ..., X4), and the Evaluation Parameters (EP), i.e., the states of features of marketing information (Y1, ..., Y10); (2) select a system operator (e.g., zero starting point, initialing operator, average image, interval image) and transform the sequences of variable factors and characteristics into their images; (3) calculate the behaviour measures of the images by adding, subtracting and taking the quotient of their values; (4) calculate the impact coefficients εij and identify the level of impact between specific CPs and EPs; (5) order the CPs in relation to the strength of their impact on the EPs; (6) calculate the correlation coefficients and identify the direction of correlation between CPs and EPs, where a negative correlation means that the CP has a less-is-better nature and a positive correlation means that the CP has a maximizing character in relation to a specific EP.

Fig. 4. The course of the research process in contradiction finding and classification in the field of marketing (Source: own elaboration): the OTSM model of TRIZ contradiction (Contradiction Modeling); Grey Incidence Analysis (analysis of the relations of impact between Control Parameters and Evaluation Parameters); Correlation Analysis from the Theory of Correlation and Regression (analysis of the correlation between Control Parameters and Evaluation Parameters; Contradiction Classification); TRIZ separation principles and the 40 inventive principles for marketing, sales and advertising (Solving Contradictions).

Table 1. Qualitative modeling of marketing communication according to the ENV model. A surviving fragment of the identified contradiction reads: 1. increasing the value of the feature fCh2 positively affects the state of the feature fIm10
"A longitudinal analysis of tax planning schemes of firms in East Africa"

Taxes play a significant role in the social and economic development of countries. On the other hand, taxes represent a significant cost to firms; hence they devise legal ways to reduce their taxes through tax planning. In East Africa, the statutory tax rate of firms averages 30%, which is considered a major burden to the firms. As a result, this study aims to longitudinally examine the tax planning practices of listed firms in East Africa countries (EACs). The study used twelve-year annual reports of ninety-one firms from EACs. Both the cash effective tax rate (CETR) and the accounting effective tax rate were employed as tax planning measures. Descriptive statistics together with the Wilcoxon signed-rank test were used to analyze the results. The study demonstrates the existence of corporate tax planning by the listed firms in EACs. The average CETR of the firms was 17%, as opposed to the statutory tax rate of 30%, demonstrating that the firms actively engage in tax planning activities. The evidence further demonstrated a gradual decrease in the tax planning activities of the firms over the past twelve years. The study further found that the rates of decline in the firms' tax planning were statistically insignificant. Despite the decrease in the firms' tax planning, the tax authorities in EACs should enforce tax laws to eliminate the tax planning problem.

INTRODUCTION

Firms are increasingly finding ways to reduce costs, maintain more profit for investment opportunities and increase their values. Among the strategies to achieve these objectives, tax planning represents a major activity that takes a large part of management time and resources (Lee, 2020). This is because tax erodes a significant percentage of firms' income. This makes tax planning a crucial strategy employed by a business in the contemporary corporate environment (Hanlon & Heitzman, 2010; Heitzman & Ogneva, 2019); however, the question is, do firms effectively engage in corporate tax planning?
The predominant assumption of shareholders is that they do, because taxes represent a substantial burden to companies; hence any tax activity that reduces a firm's tax liabilities is considered to increase the value of the firm (Jacob & Schütt, 2020; Kirkpatrick & Radicic, 2020). Nevertheless, Jacob and Schütt (2020) contended that tax planning is not costless. It was stated that tax planning is associated with many costs and risks, which comprise potential punishments such as fines and penalties from tax authorities. Apart from the risk of a firm being fined or punished for engaging in tax planning, it can also create costs arising from its implementation, legal fees, and reputational loss, which can negatively influence the value of firms that engage in such practices. Similarly, agency theorists argue that, due to agency costs arising from the shareholder-management relationship, management may misuse tax planning decisions (Graham et al., 2014; Maama & Mkhize, 2020; Putra et al., 2018). This is so because managers may have personal incentives to implement tax planning in ways that are different from the expectations or preferences of the shareholders, just to achieve their own interests. Accordingly, Campbell et al. (2020) emphasized that tax planning is a complex activity that can create room for managerial rent diversion, which can eventually erode the value of shareholders' investment.

Given the significance of tax revenue, particularly in developing nations, governments implement various strategies to enhance their tax collection capacities (Armstrong et al., 2019). For instance, the East African governments, like those of most other developing nations, have implemented various tax policy reforms to boost their tax revenue. The tax reforms that have been instigated include the establishment of revenue authorities, the establishment of large taxpayers' departments, and the digitalization of revenue collection. Apart from these, information-sharing agreements among EACs, strong deterrence mechanisms, and taxpayers' education are among the other strategies that EAC governments have implemented to increase tax revenue collection.

Despite the significant contribution of tax to the development of economies, it decreases firms' resources and investment opportunities. As a result, owners would want to see their companies pay the minimum amount of tax possible. Hence, they employ competent management to manage their businesses on their behalf. Managers are entrusted with the firms' resources to create value for the shareholders. One major strategy that management uses to improve shareholders' value is tax planning (Hanlon & Heitzman, 2010; Tang, 2019). Since tax represents an erosion of firms' value, investors would like to see a downward trend in the effective tax rate, suggesting that their firms would pay fewer taxes than they otherwise would. On the other hand, governments strive to increase their revenue collection to fund their activities, because tax revenue is the main source of the funds that finance the social and economic activities of the government (Marimuthu & Maama, 2021). Thus, for the above-highlighted importance of tax, various governments and shareholders would like to see the trend of the effective tax rate. This study examines the tax planning activities of listed firms in East Africa countries (EACs).
Additionally, the study investigates the responsiveness of the tax planning activities to the tax policy reforms (administrative and technological tax reforms) implemented by EACs during the period under study. This study alerts the governments to the existence of tax planning activities among listed firms in EACs. This will form a base for the governments and regulatory authorities to come up with appropriate policies and regulations that will ensure governments collect their revenue and also ensure that investors are protected.

EMPIRICAL LITERATURE REVIEW

Firms engage in tax planning strategies by using their complex group structures to reduce their tax burden. This is seen by many as morally reprehensible. However, such practices are not illegal, because firms use the gaps in tax laws to reduce their tax liabilities (Lisowsky, 2010).

AIM OF THE STUDY

This study aims to examine the longitudinal tax planning strategies of firms in EACs. Therefore, the study provides empirical evidence of the existence of tax planning practices in EACs. The following are the specific objectives of the study: 3. To examine the effect of tax reforms on the tax planning activities of the firms in East Africa.

Data and data source

This study uses a sample of listed firms in East African countries (EACs), comprising Kenya, Tanzania, and Uganda. The data used for this objective are firms' tax expenses and other taxation information. The firms' taxation data were obtained from the financial statements of the firms. The financial statements were obtained from the stock markets, and the annual reports from the companies' websites. Tax planning data can, in principle, be obtained from either firms' tax returns or their financial statements. However, these two sources of tax data are highly correlated, since they are drawn from the firms' profit (Graham & Mills, 2008). Plesko (2004) views tax returns as the source that provides accurate tax planning data, but they are confidential and not easily accessible. Given this, the tax planning data for the study were sourced from the financial statements of the firms. Financial statements are also a good source of tax planning data because they are easily accessible and reliable, as they are audited by independent and competent auditors. This study used the effective tax rate (ETR) to measure tax planning. The ETR was measured as the ratio of a firm's tax expense to its income before tax (Hanlon & Heitzman, 2010). Therefore, the effective tax rate measures the ability of a company to minimize its tax liabilities. This is indicative of the relative tax burden across firms. Firms with lower effective tax rates are said to be more tax aggressive compared with firms with a higher effective tax rate. The effective tax rate can be categorized into the cash effective tax rate (CETR) and the accounting effective tax rate (AETR). As a result, this study uses both the CETR and the AETR to measure tax planning.

Definition and measurement of variables

The CETR is computed as the ratio of cash taxes paid to pre-tax accounting income, and the AETR as the ratio of the total tax expense reported in the income statement to pre-tax accounting income.

Data analysis method

The study employed descriptive statistics together with the Wilcoxon signed-rank test to analyze the results. Descriptive statistics such as the mean were used to present the results of the trend and level of tax planning activities of the firms in EACs. In addition, the Wilcoxon signed-rank test (WSRT) was used to check if there was any significant change in the level of tax planning activities of the firms. A minimal sketch of these computations follows.
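As an illustration of how the two measures and the year-over-year significance test can be computed from panel data, here is a minimal Python sketch. The column names (firm, year, cash_tax_paid, tax_expense, pretax_income) and the 30% statutory-rate benchmark are assumptions made for the example; the study does not describe its data layout at this level of detail.

```python
import pandas as pd
from scipy.stats import wilcoxon

STATUTORY_RATE = 0.30  # average statutory corporate tax rate in the EACs

def add_tax_planning_measures(panel):
    """panel: one row per firm-year with columns cash_tax_paid, tax_expense
    and pretax_income. Adds CETR, AETR and the tax-savings measures."""
    out = panel.copy()
    out["CETR"] = out["cash_tax_paid"] / out["pretax_income"]
    out["AETR"] = out["tax_expense"] / out["pretax_income"]
    out["savings_CETR"] = STATUTORY_RATE - out["CETR"]  # vs. the 30% statutory rate
    out["savings_AETR"] = STATUTORY_RATE - out["AETR"]
    out["gap"] = out["AETR"] - out["CETR"]              # the AETR-CETR line in Figure 1
    return out

def yearly_wsrt(panel, measure="CETR"):
    """Wilcoxon signed-rank test on matched firm observations for each pair
    of consecutive years; returns {year: p-value} for the change versus the
    previous year, in the spirit of the p-values reported with Table 1."""
    wide = panel.pivot(index="firm", columns="year", values=measure).dropna()
    years = sorted(wide.columns)
    return {y1: wilcoxon(wide[y1], wide[y0]).pvalue
            for y0, y1 in zip(years, years[1:])}
```

A yearly trend line in the style of Figure 1 can then be obtained with panel.groupby("year")[["CETR", "AETR"]].mean(), optionally smoothed with .rolling(2).mean() for a moving-average view of the trend.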
Thus, the trend in the firms' tax planning practices was analyzed based on the moving average score for every year to demonstrate whether there was any variation in the tax planning levels. In addition, p-values were obtained from the WSRT to establish whether there were significant changes in tax planning across the years.

The level and trend of tax planning in East Africa

This section presents the results of the level and trend of tax planning activities by the firms listed in EACs. The study uses both the CETR and the AETR as tax planning measures. The study used a line graph to demonstrate whether the tax planning activities of the firms increased, decreased, or remained constant over the twelve years. The line graph depicts the firms' tax savings, i.e., the difference between the AETR and the CETR, which provides an accurate measure of the actual benefit derived from the tax planning activity. Besides, a Wilcoxon signed-rank test was used to examine whether the difference in the tax planning activities of the firms changed significantly over the years. Figure 1 presents the result of the level and trend of tax planning activities by the firms in EACs. Table 1 also presents the WSRT results for the level of significance of the changes in the firms' tax planning over the years. The results show a gradual increase in tax planning activities in EA over the past twelve years. This is demonstrated in both measures of tax planning: both effective tax rates have been slightly decreasing over the past twelve years, which indicates an increase in tax planning activities. The descriptive statistics results in Figure 1 show that the mean value of the cash effective tax rate (CETR) in 2008 was 21.7%, whilst the accounting effective tax rate (AETR) was 26.3%. Figure 1 further shows that, on average, the listed firms in EA paid less tax (by 4.5%) than what they were required to pay, which further emphasizes tax planning in 2008. The evidence further shows that the CETR and AETR were 21.6% and 27.8%, respectively, in 2009. This suggests an increment in the AETR and a decrease in the CETR. This result indicates that the firms' tax liabilities on their profits marginally increased in 2009; however, the percentage of tax actually paid decreased. Once again, average tax savings of 8.4% and 2.2% were recorded by the firms, given that the average statutory tax rate in these countries is 30%. The WSRT results show that although there were changes in both the CETR and the AETR, they were statistically insignificant. In 2010, the CETR and AETR of the firms further decreased to 19.8% and 26.87%, respectively. These results estimate the tax savings by the firms in EA. Concerning the AETR, the results indicate that, on average, the listed firms in EA saved 3.2% of pre-tax earnings otherwise due to the governments in 2010. However, in the same year, the tax savings by the firms concerning the CETR were 10.2%, far more than for the AETR. Figure 1 further shows that the difference between the CETR and the AETR was 7.0%, which indicates further tax planning activity. The Wilcoxon signed-rank test results show that the change in the tax planning activities of the firms was statistically significant for the AETR (p < 0.05) and statistically insignificant for the CETR (p > 0.05). In 2011, the tax planning activity of the firms decreased in respect of the AETR (28.8%), for which the tax savings decreased to 1.1%, whilst the CETR tax savings increased to 10.8%.
Once again, the level of change was statistically significant for the AETR (p = 0.008), as opposed to the CETR (p = 0.514). Moreover, in 2012, the CETR of the firms decreased to 17.7%, which resulted in tax savings of 12.3%. The AETR in 2012 was 27.2%, which resulted in tax savings of 2.8%. These results suggest that the firms increased their tax planning activities. However, the WSRT shows that the level of increment in the firms' tax planning was statistically insignificant (p > 0.05) for both measures. Similarly, the firms increased their tax planning activities in 2013, as evidenced by the decline in CETR and AETR to 16.5% and 26.3%, respectively. This resulted in tax savings for CETR (13.5%) and AETR (3.7%), suggesting that the firms increased their tax planning activities in 2013. The WSRT results further indicate that the change in the AETR in 2013 was statistically significant (p = 0.031), whilst the change in the CETR was statistically insignificant (p = 0.17). The results further demonstrate that in 2014, the AETR and CETR increased to 28.3% and 19.4%, respectively. Nonetheless, the tax savings for AETR and CETR were 1.7% and 10.6%, respectively. The Wilcoxon signed-rank test results show that the level of change in the tax planning activities for both measures was statistically insignificant (p > 0.05). Furthermore, both tax planning measures indicate the existence of tax planning activities in EA in 2015: the CETR (16.0%) and AETR (28.0%) tax savings of the firms increased in 2015 to 14.0% and 2.0%, respectively. Further detail can be ascertained from Table 1. In 2019, the CETR of the firms was 13.0%, as opposed to an AETR of 23.3%. This resulted in tax savings of 17.0% and 6.7% for CETR and AETR, respectively. Once again, the difference between the tax liability and the tax paid by the firms was 10.3%. This represents significant tax savings in 2019 and is a testament to tax planning among the firms. The results demonstrate that the increment in tax savings in 2019 was statistically significant for CETR (p = 0.034), whilst that of the AETR was not (p = 0.108). The results have shown an increasing trend in tax planning by the firms in EACs. This suggests that the firms aggressively deploy means to reduce their tax liabilities, and that the governments in the EACs have failed to institute pragmatic measures that would minimize tax planning or tax avoidance. It also suggests that the tax policy reforms established by the various governments have not achieved their desired objective of reducing tax planning. For instance, the East African governments implemented various tax policy reforms in 2012 to boost their tax revenue. The reforms included the establishment of revenue authorities, the establishment of large taxpayers' departments, and the digitalization of revenue collection. Apart from these, the EACs agreed to share information to reduce tax avoidance and evasion. It must be admitted that the objective of these policies was not solely to minimize tax planning; however, since corporate tax represents a major component of tax income, it was expected that such policies would curb the incidence of tax planning. A possible reason for the inability of the tax reforms and policies to reduce the level of tax planning is that the firms may also have developed strategies to circumvent them.
The reforms may have motivated the firms to engage the services of professional and expert tax consultants to assist them in their tax planning. One point to note is that if the governments in EACs are unable to use tax reforms to reduce the firms' tax planning, they must enact laws that severely punish firms, and their management, that engage in tax evasion. Once again, the results show that the management of the firms was more concerned with increasing financial performance and the value of shareholders' investment than with the firms' image, even though firms that pay the required tax are regarded as responsible and receive public acceptance. These results suggest that the firms in EACs considered that there was no reputation loss from tax planning, which indicates that legitimacy theory is not significant in explaining the planning activities of firms in EACs. Agency theory, by contrast, can explain the tax planning activities of the firms, because management used tax planning as a tool to demonstrate to shareholders that it works to pursue their interests. This is plausible because tax represents an erosion of firm value; investors would like to see a downward trend in the effective tax rate, suggesting that their firms pay less tax than they otherwise would. These results confirm the findings of previous studies such as those of Drucker (2010) and Duhigg and Koscieniewski (2012), which found that the tax planning of firms in the US has increased over the years. However, the result contradicts the findings of Boffey (2017) and Garside (2016), whose evidence demonstrated that firms in Europe are unable to use tax planning to significantly reduce their tax burden.

CONCLUSION AND RECOMMENDATIONS

The study used both moving averages and the Wilcoxon signed-rank test to examine the level and trend of tax planning of listed firms in East African countries (EACs). The study found that corporate tax planning exists among listed firms in EACs. The results further show a gradual decline in effective tax rates, and hence a gradual increase in tax planning, over the past twelve years, although most year-on-year changes were not statistically significant. The evidence showed that the tax planning activities of the firms increased from 2008 to 2019. As the EACs are eager to become middle-income countries between 2025 and 2030, tax planning practices among big and multinational companies may impede government efforts to collect the domestic revenue needed to meet their development goals. Given that the results evidenced an increasing trend in tax planning, the study recommends that tax authorities implement additional tax enforcement mechanisms that may curb the tax planning problem. However, the factors that influence these firms were not addressed, which provides an opportunity for further research. The study therefore recommends that future studies investigate the factors that influence the tax planning activities of firms in EACs.
Giuseppe Sarti and the topos of the tragic in Russian music

Praising Glinka's A Life for the Tzar as an inauguration of Russian music, Vladimir Odoevskii emphasized that its composer succeeded in elevating the figure of a simple peasant to the realm of tragedy. Odoevskii's claim thus embodied the plea for a manifestation of tragedy based on a nationally driven, socially and culturally significant idea, conveyed through a sublime mood. Adopting this domain was indeed a long journey for music in Russia. The present essay traces forerunners and emerging elements of tragedy and its musical implementation back to the last decade of the eighteenth and the beginning of the nineteenth centuries, with an emphasis on Giuseppe Sarti's (1729–1802) impact on the adoption of the tragic and the sublime in Russian music. A survey of Sarti's stage works of the 1780s–90s reveals his preferred pattern of conveying a sublime atmosphere of classicistic tragedy through unmeasured text and declamation — whether notated or not — located at the cathartic points of the drama, in conjunction with unison chorus and illustrative elements in the orchestra. Sarti's contribution to the marriage of European neo-classicism with local trends and to the domestication of tragedy in Russian music is demonstrated through a survey of declamation in recitative and melodrama styles in works by Evstignei Fomin, Stepan Davydov, Stepan Degtiarev, and Osip Kozlovskii.

Praising Glinka's Zhizn' za Tsaria as an inauguration of Russian music, Vladimir Odoevskii emphasized that its composer succeeded in elevating the figure of a simple peasant and the tunes of plain folk to the realm of tragedy [1, p. 126]. Odoevskii's claim thus emphasized the plea for basing the manifestation of tragedy on a nationally driven, socially and culturally significant idea, conveyed through a sublime mood. Adopting this domain was indeed a long journey for music in Russia. The present essay traces the forerunners and emerging elements of tragedy and its musical implementation back to the last decade of the eighteenth and the beginning of the nineteenth century, with an emphasis on Giuseppe Sarti's impact on the adoption of the tragic and the sublime in Russian music. During his long sojourn in the Russian Empire (1784-1801), Sarti was active as a multifaceted composer, ambitious music director, and illustrious teacher, incorporating current European trends into the local ambience. Prior to his arrival, and even during his stay in Russia, Sarti's 35 dramme per musica, seven dramme giocose, and nine azioni teatrali and pastorali (not including the Singspiele in Danish compiled during his service in Copenhagen) had been successfully performed on various European stages. His previous positions at the Milan Duomo, and earlier at the Venetian Ospedale, enabled him to gain expertise and esteem in composing solemn choral compositions for the Roman, Ambrosian, and Aquileian rites [2][3][4][5]. Sarti's last works composed prior to his departure from his homeland, Giulio Sabino (Venice: S. Benedetto, 1781) and Idalide (Milan: La Scala, 1783), and especially the new operas written for Catherine II's court, Armida e Rinaldo and Castore e Polluce (St. Petersburg: Hermitage, 1786), impressed with their ethical rigor and refined nobility.
These and additional works reinforced Sarti's reputation as a champion of the classical movement that powerfully entered European culture in the wake of Johann Joachim Winckelmann's seminal discoveries about ancient art, the paintings of Jacques-Louis David and Anton Raphael Mengs, and operas by Christoph Willibald Gluck, Tommaso Traetta, and Antonio Sacchini [6-11; 12, p. 329-36]. These works, along with Sarti's late dramatic compositions, shaped the marriage of European classicism with local trends and the domestication of tragedy in Russian music. One of these works was Alessandro e Timoteo, which premiered on 6 April 1782 in Parma, Teatro Ducale [I; II; 13-15]. The splendor of this performance - its dazzling vocal virtuosity, orchestral brilliance and outstanding timbral imaginativeness, gorgeous scenery and choreography, and most of all its moral rigour - highly impressed noble attendees, among them Grand Duke Paul Petrovich and his wife Maria Fedorovna, who in fact initiated Sarti's invitation to Russia [14, p. 68; 16, p. 159]. Alessandro e Timoteo is written on a pseudo-Hellenistic source: its librettist Carlo Castone della Torre di Rezzonico adapted John Dryden's ode Alexander's Feast: Or, The Power of Music (1697) to suit the conditions of a festa teatrale. In accordance with the traits of this subgenre, Sarti's masterpiece has a one-act structure divided into a dozen scenes. Its subject - the manipulation of a monarch through music - is enacted through a progression of episodes in which the bard Timoteo arouses various emotions in Alexander the Great. Timoteo sings of Alessandro's mythical descent from Giove, evoking an awareness of his own boldness and audacity. Later he praises the pleasures of wine, encouraging Alessandro to drink; then switches to the sad death of the defeated Persian king Darius, instilling in Alessandro pity for his vanquished enemy; and then proceeds with a longing for love. Eventually, Timoteo's song inculcates in Alessandro feelings of anger and vengeance, leading him to burn down the palace in Persepolis. The musical and literary representation of boldness and of the pleasures of wine, love, and revenge leans on a robust operatic tradition, deployed via vocal numbers - arias, ensembles, choruses - and dances. For mercy, the most complex emotional statement, Sarti assembles an arduous moral combat between Timoteo and the emperor across several scenes. Reaching his goal, Timoteo now succeeds in turning Alessandro to yearning for desire and pleasure: here the recurring motives of the oboe alternate with virtually unaccompanied speech, "Quanto è dolce, Alessandro." Timoteo's graceful rondeau "Bella dea" closes this grandiose display of emotional manipulation. The dramatic intensity and textual-musical complexity of the recitative scenes were among Sarti's main achievements as a dramatic composer. These scenes usually increase the dramatic tension and lead to a climactic point, such as the tumultuous meeting of Giulio Sabino with Epponina and Tito, "Venite o figli," which precedes his famous aria "Cari figli" in Act 2 Sc. IX of the eponymous opera [III; IV], and the dialogues of Enrico and Idalide, "Sei paga alfin" / "Che angustia è questa!" (Act 1 Sc. IX), and of Palmoro, Idalide, and Ataliba, "Ah purtroppo il conosco" (Act 2 Sc. XIII), in Idalide [V; VI].
It becomes clear that Sarti unequivocally tends to entrust such moments of catharsis to declamation, whereas vocal pieces - the arias and ensembles that enclose such scenes - usually resolve the tension, amazing in their virtuosic display and emotional exaltation. In so doing, Sarti once more exhibits his faithfulness to the classicistic reform movement in opera, resonating with similar famous scenes in operas by Gluck, Traetta, Jommelli, Bianchi, and Sacchini. In his works for the Russian stage, Sarti continued to impress the audience with the dramatic power and sublime elevation of the recitative scenes. The main protagonists' dialogue, "Addio / Gia mi abbandoni?" from Armida e Rinaldo [VII], astonishes with music expressive of moral dilemmas, the bitterness of Rinaldo's choice between love and civic obligation. A similar intensity of moral elevation is further reached in Rinaldo and Ubaldo's debate "E abborisce l'aiuto!" (Act 1 Sc. IV and VI). Sarti's reputation as a master of elevated dramatic declamation was reinforced in the famous Greek Scene from Nachal'noe upravlenie Olega - the historical play on a text by no less than Catherine the Great, sensationally premiered on 22 September 1790 [VIII]. Sarti's music for the final act accompanied a meta-theater: the performance of a big scene from Euripides' Alcestis (verses 568-604 in the Russian paraphrase), in which he endeavors to imitate ancient Greek music [VIII; 17-19]. The Greek Scene comprises two expanded sections: a dialogue styled as melodrama, and the Ode to Apollo, set as a series of unison male choruses. The through-composed expanded scene in melodrama on a prose text is subdivided into three phases: the first portrays Heracles' arrival at Pherae, with the alternation of his spoken repliques and an accompanied unison chorus ("O! Druzi!" / "Zastanesh', Iraklii", mm. 1-80); then an unnotated dialogue between Heracles and Admetus, with the latter's generous invitation to stay in his house ("Raduis'ia Zevsov syn!", mm. 81-130), accompanied by brief orchestral interjections; and finally the chorus' bewilderment at Admetus' restraining his mourning for Alcestis for the sake of noble hospitality ("Chto sie ty delaesh', Admet'?", mm. 131-165). The orchestral accompaniment mirrors the emotional agitation and gravity of the text by using syncopated figures, dotted rhythms, and additional concitato devices. In his explanatory note Eclaircissement sur la musique composée pour Oleg [IX, p. 361-4; X], where Sarti discussed in detail the musical arrangement of this scene, he admitted that the speech should have been notated, but since he was composing it far from St. Petersburg, without being aware of the vocal capacities of the actors, he refrained from notating their speech and designed the dialogue as melodrama: "The Greek recitation had to have been notated, for it was accompanied by the lyre during the recitation of the players, and by the tibia during the singing of the Chorus. But I did not venture to notate the declamation of Hercules and Admetus, not knowing, firstly, the range of the voices of those who are to represent them. Secondly, what obstacle could such a novelty create for the interpreters, and how would they be able to carry out such a thing without the presence of the composer?"
[my emphasis]. A closer observation of this scene indeed reveals Sarti's reluctance to incorporate spoken dialogue: while the outer sections present a coherent musical flow with the lofty conversation of an individual with the choir (Heracles in the opening and Admetus in the closing subsections), the sophisticated debate between Heracles and Admetus on the identity of the deceased person sounds like a mechanical alternation of brief spoken phrases with monotonous one-bar orchestral repliques built of arpeggiated triads with their secondary dominants, in the recitativo semplice style but lacking its dramatic power. Some tension accumulates at the point when Heracles approaches the solution of the Admetus puzzle - the subject of his mourning ("Uzhe umre", m. 100) - when the harmonic progression is replaced with a single chord (see ex. 2a-b). As with Timoteo's monologue and cavatina discussed above, Sarti's classicistic ambition embraces instrumentation: the declamation alternates with interjections of harp and violin pizzicato in imitation of the lyre, and flute in imitation of the tibia. The composer explains the peculiarity of his instrumentation as follows: "I nevertheless accompanied this declamation by short interludes of the harp, intertwined with pizzicato on the violins, in order to represent, as closely as possible, the lyre of the ancients. <…> In this way the actors will not have to worry about the music, assuming that they only declaim it in a loud voice, with the utmost nobility, as the Greeks did in order to impose their presence and be heard from afar. It is to be hoped that the two actors will have beautiful voices in a low range, especially that of Hercules" [IX, p. 361-2]. Russian intellectuals prized the music of the Greek Scene for its moral loftiness and ideological significance. The statesman and littérateur Gavrila Derzhavin emphasized the pertinence of an ode, in conjunction with authentic or stylized instrumentation, as a representation of the sublime and noble in tragedy: "In ancient times it was accompanied with a plain tune; it was sung with a lyre, psalter, gusli, harp, cetra, and in the newest times with other instruments as well, strings more than anything else <…>. Nowhere can one sing powerful odes better or more elegantly <…> to the immortal memory of the fatherland's heroes and to the glory of good sovereigns, as in an opera in the theatre. Catherine the Great knew this perfectly. We saw and heard the effect of the heroic musical presentation composed by her in wartime under the name Oleg" [20, p. 516-7, 602]. Sarti's use of melodrama in conjunction with unison chorus, apart from being circumstantial and conditioned by his lack of confidence in setting Russian texts to music, could mirror his susceptibility to a novel and fashionable device that had turned into a trend in Europe. During the 1760s-70s, melodrama had already become a common component in works on tragic themes, though not necessarily linked to the urge to imitate the stile recitativo. It had been used both as a technique for entire works, as in the different versions of Pygmalion by Jean-Jacques Rousseau, Anton Schweitzer, and Georg Anton Benda, or the latter's Medea and Ariadne auf Naxos, and as an exceptional device in separate scenes in works by Christian Gottlob Neefe, Johann Friedrich Reichardt, Georg Joseph Vogler, Peter Winter, Christian Cannabich, Wolfgang Amadeus Mozart, and others [21, p. 59-63].
In the biblical melodramas, this interest in connecting declamation with choruses was also in large part inspired by the revival of late seventeenth-century classicistic dramas. Thus Jean Racine, in the Preface to Esther (revived in Paris in 1805), wrote: "I realized that by working on the plan that had been given to me, I was executing in a way a design that had often passed through my mind, which was to bind, as in the ancient Greek tragedies, the choir and the song with the action, and to use for singing the praises of the true God that part of the chorus that the pagans used to sing the praises of their false deities" [my emphasis]. In Italian opera, melodrama (melologo) remained generally atypical, due to the traditional preference for vocal culture as the main vehicle for dramatic and emotional expression, and due to the strong conventions of the genre. At the same time, Italians working abroad in foreign cultures displayed their susceptibility to this new vogue. It is worth mentioning the melologo Werther by the Piedmontese composer and violinist Gaetano Pugnani (1731-98), performed in German in Vienna (1796) [XII]. This monumental score adapts Goethe's epistolary novel Die Leiden des jungen Werthers (1774); it is constructed as an alternation of closed orchestral numbers with a narrator's enunciation of the letters. Only in some episodes, when the letter describes balls and dancing, or Charlotte playing the piano, do music and text overlap. Another bold example is Scene VI of Act 2 in Cherubini's Médée (1797), on a libretto by François-Benoît Hoffmann [28; 29, p. 129-31; XIII, p. 256-69]. Similar to Sarti's Greek Scene, although on a much subtler level, it presents the sequence of a male unison nuptial chorus, "Fils de Bacchus," followed by Médée's repliques in melodrama (replaced with recitative in the later version). It is introduced at the climactic point of the tragedy, while the solemn strains of the wedding procession underline Médée's painful observation of her beloved husband's marriage to Dircé [29, p. 131]. Such a combination of obbligato recitatives and melodrama in conjunction with unison male choir, characteristic orchestration, and pantomime, although developed concurrently by French composers, can be considered Sarti's individual penchant, to which our composer did not hesitate to return in the wake of Oleg's triumphal performances. Scene VII of Act 2 of Sarti's penultimate Italian opera - Andromeda, on a libretto by Ferdinando Moretti [XIV-XVI] - depicts the most thrilling moment of the drama. It displays the strenuous combat of Perseo with the sea monster, with the enchained Andromeda watching from a rock with bated breath, and a chorus of the people of Jaffa observing and commenting on the battle from the seashore. This tableau vivant is a culmination of pantomime (with choreography by Charles Le Picq and decorations by Pietro Gonzaga), melodeclamation, and chorus: Andromeda's aspirations are declaimed to free verses, with brief phrases in declamation and recitative alternating with her measured singing with the chorus, followed in the next scene by the triumphal chorus of priests (see ex. 3). Summarizing these examples, it becomes clear that Sarti's notion of the tragic was conceived in firm connection with classicist ethical and moral rigor. In his dramatic oeuvre Sarti preferred to convey the sublime atmosphere of tragedy through unmeasured text and declamation, whether notated or not, rather than via cantilena and closed operatic forms.
The location of these portions of declamation at the climactic points of the drama was carefully calculated. He succeeded in attaining great dramatic tension through the dialogue structure and the alternation with choruses (in unison or in plain chordal texture), and through illustrative elements in the orchestra. Often, Sarti's classicistic ambition in such scenes embraces the instrumentation: arpeggios of the harp or pizzicato of the strings in imitation of a lyre. Closed musical pieces - choruses, dances, or arias - that follow such a scene sustain or resolve the tension. How did Sarti transmit his vision of the tragic to his disciples and admirers in Russia? From its very beginning, Russian national opera fixed its identity as comedy based on everyday domestic plots, sentimental pastorals, or fairy subjects. Comic opera thus established a robust tradition on Russian soil, featuring dozens of original works and even more numerous arrangements and translations. It was represented by an impressive variety of literary subspecies, including the "drama with voices" Rozana i Lubim (Kerzelli / Nikolaev, 1788); the grotesque buffa Neschastie ot karety (Pashkevich / Kniazhnin, 1779); the folk vaudeville Mel'nik - koldun, obmanshchik i svat (Sokolovsky / Ablesimov, 1779); the satiric comedy Skupoi (Pashkevich / Kniazhnin, 1782); the "comedy with arias and dances" (comédie mêlée d'ariettes) Sokol (Le faucon, Bortniansky / Lafermière, 1786) and Amerikantsy (Fomin / Krylov, 1800); the "igrishche nevznachai" ("an unintentional play") Jamshchiki na podstave (Fomin / L'vov, 1787); the political pamphlet Gore-bogatyr' Kosometovich, on Catherine II's play with music by Martín y Soler (1789); and many others. At the same time, tragedy in music did not settle down on Russian soil, despite bold precedents by Francesco Domenico Araja, who served the Russian crown during 1735-62. Apart from a dozen Italian serious operas composed by Araja in Russia, his Tsefal i Prokris on Alexander Sumarokov's text (based on Ovid's Metamorphoses, 1755) became the first opera in Russian - a somber tragedy of faithful love disrupted by the intervention of the gods, and lacking the traditional happy ending (il lieto fine) of Italian opera seria. Sumarokov's text, written in metric verses throughout, was set by Araja with the traditional operatic distinction between vocal numbers and recitatives. Three years later, the newly appointed music director of the Russian court, Hermann Friedrich Raupach, presented the second tragic opera in Russian, Alkista (on Sumarokov's arrangement of Euripides), which preceded Gluck's masterpiece by almost a decade and became a significant cultural event. The foremost late-Settecento Italian musicians, during their tenures at the Russian court, presented new serious operas that became the chefs-d'oeuvre of the genre: it suffices to mention Ifigenia in Tauride by Baldassare Galuppi (1768), with its civil pathos, or the emotionally and dramatically powerful Antigona by Tommaso Traetta (1772), both on librettos by Marco Coltellini. Vincenzo Manfredini, whose Russian career (tenures 1758-68 and 1798-99) was only moderately successful, staged three serious operas in Russia: Semiramide, L'Olimpiade, and Carlo Magno (during 1760-63).
Domenico Cimarosa (in Russia 1787-91), a genuine composer of sparkling opere buffe, was forced to entertain the Russian court with serious operas on Ferdinando Moretti's texts, La vergine del sole (1788) and La Cleopatra (1789), in which he did not succeed [33]. Derzhavin, the spokesman of the Russian Enlightenment and neo-classical movement, considered opera the best and sole musical equivalent of Greek tragedy. It is worth quoting a sizeable excerpt from his treatise "On lyrical poetry" that unequivocally shows this notion: "Opera <…> in many respects is nothing else than an imitation of ancient Greek tragedy. <…> regarding its moral aim, what prevents raising it to that same level of dignity and respect that characterized the Greek Tragedy? <…> It is known that in Athens the theater was a political institution. With it Greece maintained for a long time the righteous feelings of its people, proving its advantage over the barbaric ones. It has been written and sufficiently said that honour is a passion of noble souls; that nothing else can give birth to heroes and govern their hearts than it. <…> Nothing amazes the people's mind <…> as these fascinating spectacles. This is the essence of the aréopage's politics and a true destination of the Opera <…>. Following the ancient custom, for the sake of its marvel, Opera - naturally, the tragic one - draws its content from pagan mythology and ancient and medieval history. Gods, heroes, knights, fairies, sorcerers and magicians are its protagonists <…>. An author of both operas and tragedies could arrange the same content, displaying famous deeds, complicated with contrasting passions, that end with amazing denouements of solemn or plaintive adventures. The Opera writer differs from the tragic [author] solely in that he valiantly deviates from the natural track or even abandons it altogether, amazing the spectators with frequent changes, diversity, splendour and marvels, irrespective of whether it is natural or unnatural, apparent or miraculous. In the tragic genre, the preferable [mood] is the touch of the sublime, which reveals itself with the help of strong passions, and not just with words; which avoids sophistication of fable and action, chooses plainness, does not hasten too much, knowing that this goes counter to the nature of singing, and above all avoids long and tiresome resolution, considering that this is a rational feature that is appropriate for Tragedy but not for Opera, where more feeling should be shown, in the course of which the speeches and deeds are expressed in brief and clear language. Songs or odes for choruses, when appropriate <…>, should be plain, strong, not pompous, and filled with vivid feeling. <…> A singer and composer will be able to borrow from him neither expression nor pleasure. The Opera writer should necessarily know their talents and adjust himself to them, or vice versa, in order to attain a harmony of all its components" [20, p. 598-604]. Despite this detailed declaration of an urge for national appropriation of serious opera, isolated and sporadic attempts to create a grand mythological opera on a Russian text produced individual specimens but failed to sustain a tradition. It appears that Russian was first and foremost conceived as a literary language, not as a means of expressing a text musically. Tragedy thus for a long time remained solely the domain of spoken drama.
Apparently, adjusting the distinction between blank and metric verses, essential for European operatic librettos, to the new principles of tonic-syllabic versification of the Russian literary language proved difficult to achieve. Musically appropriate Russian speech-declamation in serious dramatic genres thus became a substantial challenge, to be confronted and solved only in the wake of the Napoleonic epoch and the powerful national movement it evoked. Nevertheless, the territory of tragedy and serious drama did not remain entirely foreign to Russian theater, and found an outlet in the amalgamation of a ramified musical component that included overtures and orchestral entr'actes, choruses, folk songs, and ballet. Russian theater had absorbed the European experience with mixed conceptions of word-sound-motion relations, albeit with the musical and spoken levels remaining unsynchronized. Maria Shcherbakova noticed the generic fluidity of integrating music with drama, defining the situation as a specific generic synthesis in each work [41, p. 13]. In what way did Sarti contribute to the musical appropriation of tragedy in Russia? Among his colleagues and compatriots, Sarti remained in Russia for the most extended period, serving two sovereigns (1784-86 and 1792-1801) and spending four years in voluntary exile in Malorossia. Of his fellow compatriots, only the tenures of Araja and Catterino Cavos (1798-1840) proved longer, and their convergence with Russian national culture became equally strong. Unlike other foreigners, Sarti was recognized mainly as a serious composer; he was "entrusted" to marry European classicism, with its emphasis on the representation of the sublime, lofty, and serious, with local music. Sarti's impact on Russian culture has been recognized mostly through his corpus of oratorios, cantatas, and hymns on Russian and Old Slavic texts, and his panegyric music for choirs and orchestra with unusual timbral effects: artillery (cannon shots), bell chimes, and fireworks. In this field he indeed greatly influenced Russian church and civil compositions. Mikhail Ivanov-Boretskii emphasized Sarti's pivotal contribution to Russian music as a highly esteemed pedagogue: "He not only composed music (including works for the Russian church) and performed his operas, which other Italians had been doing, but he raised the craft of performing in opera theater to a very high level and educated a number of talented students" [44, p. 199]. Sarti trained most of the leading native Russian and Ukrainian composers of the period, and they clearly followed the patterns inculcated in them by the celebrated mentor. Some of these composers, such as Daniil Kashin (1769-1841), Artemii Vedel' (1767-1808), and Peter Turchaninov (1779-1856), were trained by Sarti on an institutional basis, during his service as Director of the Music Academy in Kremenchug (in present-day Ukraine) in 1787-91 [45]. The affiliation of Stepan Davydov (1777-1825) with Sarti became the most intimate: he lived in his teacher's house as a pensioner for six years, in 1795-1801, during which he was systematically trained and preached to by the aged master. Some others, without being Sarti's formal students, were close to him and took part in his musical activities: this was the case with Osip Kozlovskii (1757-1831), who served as an officer in Potemkin's army in Bessarabia during the Second Russo-Turkish War, and actually replaced Sarti as Potemkin's music director in 1790-91 [47, p. 432-3].
Stepan Degtiarev's (1766-1813) connection with Sarti is most obscure: in 1791, when Sarti was seeking employment with Count Nikolai Sheremetev in Moscow, Degtiarev was, albeit young, already a proven professional. Remaining Sheremetev's serf musician, he was in charge of all the intense and multifarious musical activities, including conducting and directing performances at the serf theaters in Kuskovo and later Ostankino, private and public concerts in which the Sheremetev orchestra and the Russian horn band took part, hosting all foreign guest musicians and actors, and composing. Nevertheless, the influence - whether intentional or intuitive - of Sarti's style on Degtiarev's compositions remains indisputable. Evstignei Fomin (1761-1800), without taking lessons from Sarti, was strongly affected by his art, owing to the strong public resonance of Sarti's musical activity. Although it proves hard to reconstruct Sarti's teaching method in its entirety, some valuable information can be drawn from his work with another famous student, Luigi Cherubini, during 1778-82 in Florence, Bologna, and Milan [27, p. 1-2; 29]. A brief excerpt found among Cherubini's manuscripts in the Bibliothèque nationale de France, Paris, seems to represent a summary of certain principles of setting verses in dramatic recitative. It appears to be a draft of Sarti's unfinished pedagogical treatise Melopoea. This concise note witnesses the utmost significance Sarti assigned to the verisimilitude and dramatic potency of musically expressed speech: "The cantilena of the recitative must not be either too grave or too high; but it must imitate the inflection of speech, except in the cases where the sentiment of the discourse requires a more meaningful expression in the melody" [cit. after: 23, p. 331; 51]. His statement that "music must imitate the inflection of speech" anticipates the aesthetic credo of mid-nineteenth-century Russian national composers, resonating with the famous claim by Alexander Dargomyzhskii, "I want the musical sound to express the text directly, I want truth!" [53, p. 57-8] (letter to L. I. Belenitsina, December 9, 1857), and predicts Dargomyzhskii's and Musorgskii's pioneering efforts in the realistic formation of melodic recitative, balancing the lyric and the naturalistic. Sarti's experience with declamation and various styles of musically expressed speech found its continuation in various musical-dramatic subgenres, such as melodrama, ballet-pantomime, oratorio, and incidental music. Melodrama, although occupying quite a brief span of a couple of decades, became an important site for the integration of music with drama, perceptively mirroring the general process in the development of both arts. On the one hand, its proximity to the conventions and devices of opera positioned it as a musical-dramatic incarnation of the lofty ideals of classicistic tragedy. On the other, melodrama benefited from the flexibility of its combination of musical and dramatic elements. Fruitful for that stage of Russian national theater, the scenographic principles of melodrama strengthened the pictorial qualities of music, turning them into the main conduits of dramatic content. Furthermore, the text of melodrama was free from the distinction between blank verse (versi sciolti) and metric verse (versi lirici) that was strictly regulated in operatic libretti.
All the text in spoken tragedies was written in metric verses, allowing a flexible combination of various styles of enunciation: from speech and declamation on a fixed pitch level to emotional prosody and even arioso-like segments. The first melodrama composed in Russia in the native language was Orfei, on Iakov Kniazhnin's text, by the Bolognese composer Federico Torelli (1781, music lost). Evstignei Fomin, ten years later (1791-92), presented a new setting [XVII; XVIII; 41, p. 43-6; 52]. In the wake of Sarti's Oleg, the Kniazhnin-Fomin Orfei became an aesthetic experiment in reviving ancient tragedy, amplified by faithfulness to the myth, in which Euridice dies in the end and Orfeo is torn apart by the furies. It has been generally agreed that with Fomin, tragic pathos entered Russian music. Kniazhnin's poetic design of the text approximates an operatic libretto; it is based on the technique of speaking to different addressees, thus enabling sufficient scope for interrupting the dramatic action with static points of emotional reflection. Unlike many well-known Italian eponymous operas, Kniazhnin's text conveys the climactic moment of the tragedy, showing the serpent biting Euridice, with Orfeo observing from the side. An additional peculiarity of the poetic source was that the dramatist gave precise hints at musical numbers to interrupt the spoken text, evoking an alternation and flexible combination of melodramatic components and properly operatic elements. Fomin adapted the German model of melodrama, already successfully implemented by Sarti in Oleg, with a through-composed design: the text either precedes or follows the music, or is synchronized with it. The orchestral score boasts the typical vocabulary of melodrama, musically depicting the underworld with its paraphernalia, the serpent's sting, Euridice's agonizing, and Orfeo's mourning and desperation via conventional rhetorical figures and topoi. Concurrently, there are closed orchestral pieces: Orfeo's "wordless arias," evoked by Kniazhnin's remark in the text "Orfei sings and accompanies himself on the lyre," with his voice personified by solo clarinet supported by the pizzicato of strings (nos. 28 and 38 in the score); three bass choruses by invisible divine messengers ("Imej nadezhdu nesomnenno," "Pluton ustavy smerti razrushaet," and "Eshche tvoi chas," nos. 20, 36, and 44); the overture; and the dance of the furies (no. 55). Fomin's operatic approach is further reinforced by the fact that Orfeo's repliques "K prekrasnoi solnechnoi strane" and "Prinudia zhit', zastaviti stradat'" (nos. 41 and 47) are both designated and designed as accompanied recitatives, although the declamation remained unrhythmicized. A chanting monophonic scansion of the unnamed basses, which emphasizes a strictly rhythmic text, is accompanied by the grave and somber sound of the Russian horn band - this bold detail of orchestration points unequivocally to the underworld scene from Sarti's Castore e Polluce (Act 4, Scenes I-II), with its unprecedented use of the Russian horns in an operatic score (in early September 1786 Fomin had already returned from his stay in Bologna, and he attended the Hermitage performance of Castore e Polluce on September 23, 1786). Despite the truly stunning success of Fomin's neo-classical experiment, after two additional fully-fledged melodramas on Alexander Kniazhnin's texts - Andromeda i Persei and Tsirceia i Uliss (1802), with music by Alexei Titov and choreography by Charles Le Picq - the yet-to-be-established tradition of Russian melodrama as an independent work came to a halt.
(Apart from these melodramas, Titov's music for the dramas Sud Tsaria Solomona (1803), Amur sudia (1805), Polixena (1809), Emmerik Tekeli (1812), and Prazdnik Mogola (1823), as well as his comic operas, gained considerable popularity [57, vol. 4, p. 139-44].) It seems that this fact, too, eloquently supports Sarti's preference for employing melodrama as a separate scene, or even a brief episode, embedded in an opera or literary play. Such a scene was for the first time incorporated in the Russian opera Lesta, Dneprovskaia rusalka by Stepan Davydov, whose successful career was enhanced by Sarti's close patronage. (Davydov began to study with Sarti around May 1795, when he graduated from the court chapel; by 1800 he held the position of music director of the Theatrical School. Apart from incidental music, he collaborated with Ivan Val'berkh on ballets, and he composed many sacred concertos and the full four-part liturgy of Iohann Zlatoust.) Lesta (St. Petersburg: Kamenny, 1805) was defined as a "comique opera in 3 acts, with choruses, ballets, magic, and transformations"; it is the third in a sequence of arrangements of the Austrian Singspiel Das Donauweibchen (Ferdinand Kauer / Karl Friedrich Hensler), translated by Nikolai Krasnopol'skii [XXI; XXII]. A compound scene in Act 2, where Lesta tells the water maidens the story of her tragic fate, has a through-composed structure, commencing with an orchestral introduction that depicts a vibrating moonlit landscape. It leads to Lesta's agitated accompanied recitative, alternating with the water maidens' chorus trying to calm her down. Then follows the melodrama section: Lesta's unaccompanied declamation, which sounds quite tedious, alternates with closed musical units, enchanting in their orchestral splendor and bright timbral effects in a natural soundscape (solos of horns, bassoons, clarinets, and English horn; wind duets; mysterious arpeggios of harps and glass harmonica). The scene is framed by a lavishly orchestrated instrumental piece: at Lesta's order, darkness turns into dawn (see ex. 5). Sarti's impact in attaining a powerful role for musical declamation in conveying serious subjects proves equally apparent in the heroic-patriotic oratorio Minin i Pozharskii, ili Osvobozhdenie Moskvy (1811) by Stepan Degtiarev, on a text by Nikolai Gorchakov [XXIII; 48; 58]. Unlike the traditional European-style oratorio (and lacking any generic precedent in Russian music), this score abounds in recitative sections displaying various styles of musical declamation. Gorchakov's text, metric throughout and chiefly dramatic, does contain narrative and meditative portions. Atypically for an oratorio, it lacks the part of a narrator, since Russian audiences were very well acquainted with the events from national history. At the same time, the dramatic parts contain many protracted recitative sections: the declamation alternates with brief orchestral motives (sometimes punctuated by a single detached chord); emotional speech usually sounds over a background of sustained chords in the winds. For moments of dramatic importance, or for passages in which extra textual expression is required, Degtiarev introduces characteristic melodic motives or distinct rhythmic patterns.
Recitatives by Palitzin, "Gotov'tes' grazhdane" (Act 1), Prince Pozharskii's "Chto vizhu ia?", "Syny otechestva", and especially "Gotov'tes' voiny" and "Spodviznikov moikh" (Act 2), and Minin's "Velikii podvig sovershilsia" (Act 3) eloquently illustrate the variety and expression of recitatives that point to Sarti's dramatic style as Degtiarev's source of inspiration. This resemblance becomes even more obvious through the contrast between the hero's musical speech and the unison choruses of the Russian people, "Dadim sebia kak Rossam" and "Emu prilichna slava" (Act 1). Sarti's formidable presence is supported particularly by the instrumentation: a Russian horn band is employed in six numbers, enhancing the solemnity of the music, including the duet of tenors and basses in Act 1, "Velik i vsemogushch Tvorets," and the trio "Iavi, Vsevyshnii, Pomoshch'!" of Minin, Pozharskii, and Palitzin in Act 2. Gorchakov's evaluation of the music for his text unequivocally points to Degtiarev's dependence on Sarti's style: "After Bortnianskii, Degtiarev, pupil of the famous Sarti, occupies the first place among the ranks of the best composers of choral music. In his compositions is found an especially refined taste, developed according to the rules of the classical Italian composers: Pergolesi, Spontini, Cherubini, Leo, and so forth" [59, p. 203-4]. Paradoxically, portions of melodrama and musical declamation penetrated even works designated as ballets. Such ballets normally followed the performance of spoken dramas or operas as separate one-act works, or were inserted between the acts of the main play, elaborating on its subject and plot (balli analoghi). This situation instigated the strengthening of the dramatic conception of dance and pantomime, which developed tools and devices for conveying serious dramatic ideas and plots and for expressing strong human emotions. (From the 1760s, the Russian court became the center of activity of the foremost European choreographers: Gasparo Angiolini (during 1766-72), Giuseppe Canziani (1779-83 and 1784-92), Francesco Rosetti, Jean-Georges Noverre's disciple Charles Le Picq (1790-99), the first native Russian choreographer Ivan Val'berkh (1803-1819), and a series of brilliant French choreographers and dancers in the nineteenth century [35, p. 199-232; 60; 61]. Val'berkh alone staged 36 original ballets, 10 restagings of his predecessors' works, and 42 divertissements in operas and dramas; he claimed his ambition "to create a moral ballet" [61, p. 15-6, 19, 21]. Music for the ballets was composed in collaboration with the choreographer (Giovanni Paisiello, Carlo Canobbio, Vicente Martín y Soler, and Guillaume Alexis Paris built successful careers as composers of ballets), or drew on existing music by various composers, or was sometimes newly composed by the choreographer himself.) In autonomous ballet performances, dance and pantomime - the main conveyors of dramatic action - were combined with music and spoken declamation, including melodrama. Thus, the tragic ballet in five acts Didon abandonnée by Martín y Soler (1792) contains three big choruses [XXIV]. In Le Jugement de Salomon, Ivan Val'berkh's arrangement of a three-act Parisian melodrama by Louis-Charles Caigniez (1803), text portions of the melodrama (in a translation by Alexander Klushin) were declaimed to music by Alexei Titov.
Additional Val'berkh spectacles combined dance and pantomime with solo and choral singing, such as the Song of Russian Warriors in Act 1 of his four-act ballet-pantomime Russkie v Germanii, ili sledstvie liubvi k Otechestvu (1813), or a long scene in Act 2 with songs, choruses, and spoken dialogues in his Torzhestvo Rossii, ili Russkie v Parizhe (1814), to music by Catterino Cavos [62, vol. 1, p. 59-62]. The pre-Napoleonic epoch and the liberal period of Alexander I's rule became one of the most glittering pages in Russian history, marking the time of Russia's entry into world diplomacy, economics, military power, and cultural developments. An ebullient time of great expectations before the shock of the 1812 French invasion and the Patriotic War that resulted in the great victory of the Russian people, it was also a period of wide-ranging shifts of literary streams and ideological agendas: from the consistent position of Enlightenment ideology and classicism associated with the reign of Catherine the Great, to the sentimentalism that flourished during the brief rule of Paul I, to the powerful thriving of romanticism and the emerging aesthetics of realistic national drama [63, p. 129-55; 64-66]. On the eve of and during the war, dramatic theater moved into the avant-garde and proved most mobile in mirroring the growing patriotic feelings of Russian society. Rafail Zotov testified that "all branches of arts had then just one aim: love for the fatherland. On stage, [the public] did not want to see anything else but Russian, national [plays]. Literary success meant nothing! Everyone thought only of one subject, everyone lived with only one thought, breathed one desire - the greatness and glory of Russia. And never had history provided such events; never had the people so many reasons for enthusiasm; never was the people's enthusiasm for heroism developed with such strength and in such splendor, as in this great epoch" [67, p. 23]. Historically oriented patriotism offered a new avenue to Russian authors. Repertory lists attest to a comfortable coexistence of plays based on typical ancient or biblical subjects and on old Slavic epics and history, showing heroic deeds and bold personages against the background of the national and civic environment. Mid-century classicistic dramas by Alexander Sumarokov and Iakov Kniazhnin held their steady position in repertoire lists. Kniazhnin's disciple Vladislav Ozerov (1769-1816) became the most esteemed Russian dramatist of the first decade of the 19th century. His plays gained tremendous popularity due to the atmosphere of sensibility with which he infused classicistic tragedy. Ozerov created a new tragic style, making literary expression more emotionally laden and significantly increasing its sentimental component. The latter circumstance provided perfect conditions for the intimate integration of music with tragedy, amplifying a heroic-patriotic pathos and embracing a rich emotional component [41, p. 51-83; 63]. In composing music for spoken tragedies, the main figure, together with Davydov and Titov, became Osip Kozlovskii, who in 1799 was appointed inspector of music in the Imperial Theatres and a year later became a director, with responsibility for the musical aspects of all productions.
Kozlovskii's adoption of tragic topoi in incidental music, and apparently in his operas (which have not survived), was prepared in his solemn sacred compositions, mainly the Requiem Mass for the death of the last Polish king, Stanisław August Poniatowski (performed in the St. Petersburg Catholic church on February 25, 1798). It absorbed many traits of Sarti's works in similar genres, namely their elevated monumentality, sharp dramatic contrasts between choral and solo movements, orchestral magnificence, and special timbral effects, including a Russian horn band (replaced with trombones in the second version). Kozlovskii's music for Fingal - Ozerov's tragedy in three acts, based on James Macpherson's famous epic poem - is considered his best and most carefully elaborated, actually presenting his own musical-scenic rendition of the drama. Scene 2 of the play includes an epic song of the bard Ulline in three eight-line metric stanzas, in alternation with the chorus of the bards (two six-line stanzas in the same eight-syllable iambic meter), amounting to 138 lines of text. Kozlovskii's musical rendition of this scene, although relatively concise (160 measures of the score), truly astonishes with its imaginative combination of various styles of text-music relations [XXV; XXVI]. After a brief eight-bar orchestral introduction, Ulline's declamation in melodrama, "Umolkni vse v strane podlunnoi," on four lines, alternates with square phrases of the orchestra, which includes a harp part (Andante). The rest of his first verse depicts Fingal's battles, switching to an accompanied recitative (Allegro moderato) over a tempestuous orchestral background. The bards enter at "Udarili v medianyi shchit" with plain scansion in la battaglia style (Allegro vivace, triple time). Ulline's second verse, in Allegro furioso ("Vstaet Morvena vozhd' Fingal"), is recitativic throughout, with strong illustrative elements and timbral effects in the orchestra. The second choral verse, starting from the depiction of an arduous fight ("Mel'kaiut, seiutsia"), turns into the macabre scene of the deserted battlefield ("I stala vkrug ego ravnina," Andante lamentabile, with two solo cellos and strings con sordini). Ulline's last verse ("Padut, i neizbeg sud'bin") loosely combines melodrama over the tremolo of strings with notated recitative. Bold contrasts of tempo and meter (although the poetic meter remains unchanged), orchestral effects, and a carefully planned tonal structure turn this scene into a true masterpiece (see ex. 6) [47, p. 451-3, 461]. It is considered that in the wake of Fingal's performances, the heroic idea was implemented in Russian music in a stable conjunction with an epic element - the discourse of an ancient bard supported by a stylized or actual accompaniment of a harp. Iurii Fortunatov mentions Aliabiev's vocal cycle "Baian's songs," Verstovskii's cantata "Three songs of a skald" and Baian's Song in his opera Vadim, and of course Glinka's Ruslan. Although the melodrama from Fingal was enormously influential, its origins in the Timoteo monologues of Sarti's 1782 festa teatrale become rather transparent. Another Kozlovskii melodrama - "Uslyshi greshnykh plach" from Debora - also astonishes with its combination of various styles of prosodic and musical declamation and eloquent rhetorical figures in the orchestra [XXVI].
His music for Esther creates strong dramatic contrasts between accompanied recitatives by an unnamed female personage and monophonic choruses of Israelite women in highly individual pieces, such as the Preghiera (Act 1, no. 5); the Cantique "Blazhen narodov" (with elevated recitation supported by sustained chords of solo trumpet, horn, and trombone ad libitum, and harp obbligato); and the recitative "Bregisia, Tzar'" with a four-part unison chorus, "Sam sebia v zare proslavit" (Act 3, nos. 12-13) [XXVII]. The survey of scenes in melodrama in Russian music would be incomplete without mentioning Alexei Verstovskii's Askol'dova mogila, on a play by Mikhail Zagoskin; preceding Glinka's A Life for the Tzar by one year, it had a stunning success (Moscow: Bol'shoi, September 16, 1835) [XXVIII]. The dramatic conception and scenic history of Askol'dova mogila proves quite complicated, resulting from its mixed generic identity: its subject from early Slavic history combines a tragic plot with strong folk and everyday elements, and singing and dances are interwoven with spoken dialogues and melodramas. The six scenes in melodrama include a folk-style scene in which the speech of a humble person alternates with two songs by Torop and a unison chorus of Kievan citizens, "Kak u duba kora" (Sc. II, no. 14). In the finale of the same act, another unison chorus and a recitative by Vyshata (Sc. III, no. 24) complement orchestral music and melodrama by a serious unnamed person (Neizvestnyi). At the same time, a series of melodrama numbers with orchestrally supported pantomime impresses with its mysterious atmosphere and the tragic course of events in the third act: a somber orchestral introduction in B-flat minor leads to the appearance of Rogneda's apparition, Vyshata's speech, the recitative of Rogneda's shadow (supported by a harp solo), and the sound of an invisible chorus (Sc. IV, no. 28). The tragic atmosphere accumulates in two somber scenes in melodrama preceding Stemide's death, contrasting with the chorus of pursuers (Sc. VI, nos. 36 and 38). Sarti's decisive impact on the formation of Russian styles of musical declamation and on the synthesis of drama and music thus proves indisputable. Naturally, complementary paths contributed equally to the appropriation of the tragic style in Russian music: the intense development of Russian drama; the art of outstanding Russian actors such as Ivan Dmitrevskii, Peter Plavilshchikov, Emelian Shusherin, and the Sandunov and Yakovlev couples; the impressive innovative scenery of the famous stage designer Pietro Gonzaga and his school; the growing trend of orchestral writing to elucidate the content and meaning of the text; and additional important developments. Nevertheless, Russian music remains immensely indebted to the Italian master for the domestication of tragedy, which occupied the leading position in the coming epoch of great historical, political, and social upheavals.
Towards faster response against emerging epidemics and prediction of variants of concern

The author, the journal Computers in Biology and Medicine (CBM), and Elsevier Press more generally, played a helpful, very early role in responding to COVID-19. Within a few days of the appearance of the "Wuhan Seafood isolate" genome on GenBank, a bioinformatics study was posted by the present author on ResearchGate in January 2020, "Preliminary Bioinformatics Studies on the Design of Synthetic Vaccines and Preventative Peptidomimetic Antagonists against the Wuhan Seafood Market Coronavirus. Possible Importance of the KRSFIEDLLFNKV Motif" DOI: 10.13140/RG.2.2.18275.09761. On February 2nd, 2020, a more thorough analysis was submitted to CBM, e-published on February 26, and formally published in April 2020, at about the same time as the virus, initially named 2019-nCoV, was identified as essentially SARS and renamed SARS-CoV-2. This was followed by four further papers describing in more detail some previously unreported aspects of the early investigation. The speed of research and writing of the papers was made possible by knowledge-gathering tools. Based on this and earlier experiences with fast responses to emerging epidemics such as HIV and Mad Cow Disease, it is possible to envisage the nature of a speedier response to emerging epidemics and new variants of concern in established epidemics.

Background
The rapid acquisition of knowledge about a newly emerging disease is crucial to the health of human, animal, and plant populations. The term epidemic, from the Greek epi (on) and demos (people), possibly first used by Homer, is believed to have been promoted in the medical setting as the title of the treatise attributed to Hippocrates. The interest of Hippocrates in medicine seems likely to have been greatly motivated by knowledge of a plague that killed a quarter of the population of Athens. Epidemiology, the study of epidemics, is typically described as the investigation of factors that determine the frequency and distribution of disease or other health-related conditions within a defined human population during a specified period. That description reflects the founding approach of John Snow (1813-1858), in large part because of his use of maps and statistics in tracing the source of a cholera outbreak in Soho, London, in 1854. More simply expressed, epidemiology is the basic science of public health, but it is infectious diseases, and especially the "new kinds" that are of concern to national and international authorities [1], that are of interest here. "New kinds" can mean various things. Not all epidemics in humans arise from well-studied pathogens (long-known species of viruses, bacteria, etc.). Some arise as new species in the sense of a new identification and classification, but new strains of those that are very familiar, such as influenza A, tuberculosis, and potentially measles, can be serious enough. Coronaviruses illustrate both: all coronaviruses are relatively new to science, being discovered in the 1960s [2,3], but were still fairly well studied prior to the rise of SARS in 2003 [4] and COVID-19 in 2019-2020 [5][6][7][8][9]. In contrast, HIV was reasonably described in terms such as "a totally new kind" of infectious disease in the sense of being essentially unknown to modern science when it was identified in the 1980s.
It was a major factor that encouraged health journalist Laurie Garrett, in her detailed 750-page book "The Coming Plague" [1], to worry that it would be a harbinger of other pandemics. Prior to that, in the 1950s and 1960s, there was great optimism in industrial nations as medical researchers declared "miracle breakthroughs" against infectious disease on what seemed like an almost weekly basis [1]. She reviews several important principles and practices of epidemiology and gives a good account of all actual and potential epidemics known to history up to that time, and of potential future threats. However, the word "coronavirus" does not appear in Garrett's extensive index [1]. Because coronaviruses were discovered only around 1966 [2,3] and were thus a "relatively new kid on the block", especially as regards diseases serious to humans [4,5], the appearance of COVID-19 caught the world somewhat by surprise [6-11].

Aims of this paper
This paper is of the "narrative review" type and, to some extent, a "scoping review", addressing the aspects of response to emerging infectious diseases that the author considers as potentially important, and which perhaps have previously been somewhat under-discussed, or that the author thinks should be approached in a somewhat different way. In comparison to some recent papers by the author, it is neither intended to be a research paper illustrating a rapid response to an emerging epidemic, nor even a review of the "rapid review" type often used to alert relevant authorities to some new pressing need. It is, however, about such rapid responses, and it does use as examples the three early papers by the author [7][8][9] on what became known as COVID-19, and subsequent papers, as well as referring to earlier efforts in outbreaks of AIDS and other infectious diseases, all of which illustrate, to varying extents, fast responses within the present author's experience. The above is an important distinction to make, for reasons that highlight the problem. During the relatively leisurely, look-back writing of this present paper, which includes accounts of what is to be learned from the past and specifically what computational tools were found useful for the author's fast responses to COVID-19, there have been several more sub-variants of the omicron variant (which is discussed), variants that are increasingly given informal names such as "deltacron" and "stealth omicron". Indeed, during a very few hours of delay in submission of this review, due to a purely technical issue, it became clearer that a new spike in cases was due to the stealth variant BA.2 replacing BA.1 omicron, which at the beginning of 2022 had grown to represent some 99% of cases, motivating some rewriting. Shortly prior to the galley proof stage, BA.2.12.1, an offshoot of the BA.2 Omicron stealth variant, rose to represent 20%-30% of cases in the UK. But more recently still, there has been the news that companies like Moderna are developing multi-strain vaccines that can hopefully handle all of these. More recent surprises regarding quite different pathogens are mentioned in Section 5. As well as describing some tools that worked well early in the appearance of COVID-19, some further proposed algorithms are introduced to illustrate the kinds of developments that could facilitate a fast response to epidemics. Because new variants can arise quickly and significantly change the nature of the disease, the new algorithms presented as examples should be considered as "templates" for future use.
Regarding these algorithms, some clarity may be provided as regards the overall purpose and exposition, because there would appear to be some mixing of qualitative and quantitative or semi-quantitative aspects, to an extent which is perhaps not common. This paper emphasizes the importance of obtaining both kinds of knowledge in order to respond rapidly to emerging epidemics. But more importantly and unusually, the approach integrates into a common canonical form the knowledge from (a) mining structured data, including data in CSV format (comma-separated value format, essentially like a large spreadsheet) and DNA, RNA, and protein sequences on GenBank, and (b) mining unstructured data as authoritative natural language text, mainly obtained by "autosurfing" (automated surfing) of the Internet. One may also include as frequently used input (c) the knowledge extracted as above and retained for future use in a Knowledge Representation Store (KRS). The importance is that the integrated elements of knowledge in common format can be used in automated reasoning and prediction, as well as simply read directly by the human eye, as informative to the user (but see below). Most often, the canonical form derived from either structured or textual data will be in a semantic triple format < subject expression | relationship expression | object expression >, analogous to subject-verb-object languages like English. Here the expressions can contain biosequence data or other information, and the overall structure <…> is the commonest kind of basic element or "tag" of the Q-UEL language, discussed in Theory Section 2.1. It is analogous to an XML "tag" except that it may be associated with probabilistic values and used in automated reasoning and prediction in a way that takes account of degrees of uncertainty or limited reliability in the source information. Q-UEL can be considered as an extension of XML for probabilistic semantics and Artificial Intelligence, and can be converted to XML, though the result is typically uglier and usually much harder for humans to read directly. Some related comment may also be made on the nature of the informative content of this review. There are also more elaborate Q-UEL tag forms called semantic multiples, corresponding to parsed structures of sentences that contain several relationships, verbal, prepositional, comparative, or logical. These tags are usually the initial form of the knowledge extracted from natural language text and are called XTRACT tags. They can either be read directly as sentences by the human eye, decomposed into semantic triples, or used directly by the computer in certain reasoning algorithms. However, while readable by the human eye, they can often appear stilted and somewhat robotic because the sentence (or subsentences) from which the XTRACT tag is derived is reorganized, i.e., reparsed, to facilitate computer use. Primarily, this is because the graph structure representing the parsed form of a sentence is always converted by natural language processing to a linear graph as much as possible, to facilitate decomposition into the component semantic triples. Consequently, for ease of readability, some such tags have been re-expressed in the text of the present paper in a more readable English form.
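To make the semantic-triple idea concrete, the following minimal Python sketch shows how a linearized parse of a sentence (a "semantic multiple" reduced to alternating node and relationship expressions) might be decomposed into component triples; it is an illustration of the concept only, with illustrative names, and not the Q-UEL implementation itself.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    """One < subject | relationship | object > element."""
    subject: str
    relationship: str
    obj: str

    def __str__(self) -> str:
        return f"< {self.subject} | {self.relationship} | {self.obj} >"

def decompose(chain: list[str]) -> list[Triple]:
    """Split a linearized semantic multiple (node, edge, node, edge, node, ...)
    into overlapping triples, the object of one serving as the subject of the next."""
    return [Triple(chain[i], chain[i + 1], chain[i + 2])
            for i in range(0, len(chain) - 2, 2)]

# A linearized parse of "the spike glycoprotein binds ACE2, which is a host receptor":
for t in decompose(["spike glycoprotein", "binds", "ACE2", "is", "host receptor"]):
    print(t)
```

Run as written, this prints the two component triples < spike glycoprotein | binds | ACE2 > and < ACE2 | is | host receptor >, mirroring the linear-graph decomposition described above.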
Although currently this is still largely a manual process, it is being progressively developed to allow metanalyses, systematic reviews, reports, and technical papers to be written automatically, at least as good initial drafts. At the same time, however, much of the present paper is what one would expect: simply a recollective review written by the author.

Epidemics and the roles of computers
The above arguably represents a continuation of a natural trend in modern epidemiology. Computational epidemiology is a recognized field that uses techniques from mathematics, computer science, geographic information science, and public health data to analyze the spread of diseases and the effectiveness of public health interventions. Effective intervention for new pathogens and variants requires, for the most part, new diagnostics, vaccines, and therapeutic drugs. Developing effective diagnostics and vaccines, along with attempts at containment and other preventative measures, represents primary prevention. A response to emergent infectious disease also means developing effective therapeutic drugs to treat infected people (secondary prevention), and effective means of aiding recovery and diminishing the severity of after-effects (tertiary prevention), which is still somewhat imperfect in the case of so-called "long COVID". To achieve these "preventions", Garrett had noted in "The Coming Plague" that extensive data banks including genomics of pathogens would become important. This view was clearly correct, and today a rate-limiting step is the appearance of well-checked genomic details about new pathogens and strains in publicly accessible data banks, plus alerts to such submissions, as discussed and illustrated below. For most researchers today, that essentially means when the genome (DNA or RNA sequence) of the identified causative agent is deposited in a database, and today that means primarily GenBank, which is accessible at https://www.ncbi.nlm.nih.gov/nucleotide/. This is an open access, annotated collection of all publicly available nucleotide sequences and their protein translations, produced and maintained by the National Center for Biotechnology Information, which is a member of the International Nucleotide Sequence Database Collaboration. Initially, one may then undertake bioinformatics analyses followed by computational chemistry studies of the proteins of the pathogen with a view to design or selection of diagnostics, vaccines, and potential drugs. Except for some important preliminary knowledge-gathering by computer of the kind described later below, relatively little could be done by the present author regarding the disease subsequently called COVID-19 until near-finalized versions of the SARS-CoV-2 virus genome became widely available via GenBank [10,11]. Computers and the Internet are particularly essential to handle the molecular details, and here computational protein science plays an important role. Knowledge of the RNA sequence representing the genome of SARS-CoV-2 [10,11] was fundamental to the present author's growing COVID-19 project [7][8][9][12][13][14], and detailed knowledge of that sequence is obviously required for what at the time seemed the surprising use of DNA and RNA vaccines for COVID-19 by the biopharmaceutical industry and major universities (e.g., Refs. [15,16]).
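For readers unfamiliar with programmatic access to GenBank, a minimal sketch using Biopython's Entrez interface is shown below; the accession is the SARS-CoV-2 entry discussed later in this paper, and the e-mail address is a placeholder (NCBI requires a contact address). This is only one convenient route to the data, not a description of the author's own tools.

```python
from Bio import Entrez, SeqIO  # pip install biopython

Entrez.email = "your.name@example.org"  # placeholder required by NCBI policy

# Fetch the annotated GenBank record for the SARS-CoV-2 genome, MN908947.3
handle = Entrez.efetch(db="nucleotide", id="MN908947.3",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "nt")

# List the protein products annotated as coding sequences (CDS),
# among them the surface (spike) glycoprotein
for feature in record.features:
    if feature.type == "CDS":
        print(feature.qualifiers.get("product", ["?"])[0])
```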
The AI-style tools discussed below, developed by the present author and collaborators, were extremely helpful in facilitating a rapid response to COVID-19 without extensive resources, though aided and influenced by the author's earlier experience [17][18][19] in responding to the early stages of outbreaks of AIDS [18] and Mad Cow Disease (Bovine Spongiform Encephalitis) [19], as well as flaviviruses, Ebola, and a variety of veterinary diseases [17]. In the case of COVID-19, it was arguably the first known extensive bioinformatics and diagnostic, vaccine, and drug design response, starting in January 2020 in the same few days in which the disease was characterized, i.e., in which the first steps toward an epidemiological case definition were taken (analyzed in some detail below). The acquisition and application of knowledge remains important at all stages, however. One way to improve early detection is to monitor health-seeking behavior in the form of queries to online search engines such as Google, and more generally the Internet can play an important role. Its history and its layers are often considered as being in five or more stages; see Ref. [17] for discussion in an epidemiological context. The basic Internet connects computers; it began when the US Department of Defense awarded contracts, as early as the 1960s, for packet network systems to connect computers, including the development of the ARPANET, which would become the first network to use the Internet Protocol. World Wide Web 1.0 connects web pages. Berners-Lee wrote a proposal in March 1989 for "a large hypertext database with typed links". World Wide Web 2.0 connects people, using sites that use technology beyond the static pages of earlier Web sites. Essentially, it connects people by facilitating social networking. The term was coined in 1999 by Darcy DiNucci and was popularized by Tim O'Reilly in 2004. World Wide Web 3.0 connects data and knowledge. It is normally considered as represented by the Semantic Web, a collaborative movement led by the international standards body the World Wide Web Consortium (W3C). It aims at converting the current web, dominated by unstructured and semi-structured documents, into a "web of data", particularly by using the Resource Description Framework (RDF). World Wide Web 4.0, the Thinking Web, sometimes called 5.0 or higher according to classifications, will organize probabilistic knowledge and reason with it across multiple servers and help make decisions. Particularly when rendered capable of handling uncertainty and probability, it is considered by the present author and collaborators as pressingly important for a variety of applications in medicine; the Q-UEL language [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34], as an extension of that idea to epidemiological use cases, was used in the early response to COVID-19 as described in this paper. Although for any emerging epidemic the bioinformatics tools used locally and on the Internet are important for analysis of pathogens at the level of DNA and RNA sequences, the proteins for which they code quickly become a major part of the investigation. Until relatively recently, there was something of a gap between the knowledge of the molecular details of a pathogen and the understanding of molecular sciences that would be desirable to make more finely tuned, sophisticated responses.
Until the 1980s and even the late 1990s, the molecular details used for subtypes, strains, and variants were primarily about what antibodies could be raised by, and interact with, the surface proteins of the pathogen. This is especially famous publicly for the influenza A virus, where the practice and the notation continue today: recall that the numbers in "Spanish flu" H1N1 (1918-1919), "Asian flu" H2N2 (1957-1958), "Hong Kong flu" H3N2, and "Bird flu" H5N1 (1997), H7N9 (2013), H5N6 (2014), and H5N8 (2016) all relate to the immunological typing of influenza A into subtypes using the immunological properties of the external (spike-like) proteins hemagglutinin H and neuraminidase N, the number increasing with each new immunological sub-type (most of these still circulate extensively today). Although today the major step is determining the genome of a pathogen, attack by pathogens and defense against them, both naturally by the body and aided by science, are primarily a war between proteins, i.e., the proteins of the pathogen and the proteins of the host, often aided by the proteins or peptides (in effect, small parts of proteins) contained in, or implied by, or used in making, a vaccine or diagnostic. The well-known success of DNA and RNA vaccines, used for the first time "outside the lab" in the COVID-19 pandemic, does not get round the fact that they use the cells of the vaccinated person to produce the required proteins, and that knowledge of the details of protein structure and function remains important. Not least, ongoing acquisition of knowledge about new pathogen variants and how they affect the function of pathogen proteins and the response of the host is crucial, else vaccines (and perhaps also diagnostics) found effective in a first wave of an epidemic might be rendered useless in subsequent waves. Vaccines composed of pathogen proteins remain important or are emerging as potentially important tools in the modern armory, and new classes of vaccine based on chemosynthetic or cloned peptide copies of key parts of them continue to show promise as a new class of weapon in the laboratory, already proven in animal husbandry (as discussed below). The pathogen genome is, of course, not the only genome of interest. Bioinformatics tools both locally and on the Internet are also important for a fast response because of the complexity of the human genome and human molecular biology required for understanding the host response. To add to that complexity, there is an aspect of personalized medicine (including simulations) in the interactions between the proteins of pathogen and host. This is because there are not only differences in the immunological state of individuals due to past exposure and the actions of some therapeutic agents on the immune system, but also all the important proteins of the human host response, and the host receptors to which the virus binds and enters the cell, are at least to some degree polymorphic. That is, because of genomic differences, they vary from human individual to individual. For example, host cell receptors by which a virus can enter a cell can vary: notably the gene encoding CCR5, which acts as a co-receptor for HIV. Importantly, the HLA (Human Leucocyte Antigen) proteins of the major histocompatibility complex (MHC) can vary, and variations in both types of proteins often lead to marked differences between individuals in susceptibility to, and severity of, particular infectious diseases.
First responses: use of early pre-genomic knowledge in an emerging epidemic
As also described in this review, knowledge-gathering tools are very valuable for gathering the appropriate data as input to bioinformatics and computational protein science, and to facilitate automatic use of those tools, but some comments should also be made on their importance in the very earliest stages of an emerging epidemic, even before the pathogen genome is available. This may be described as a more predominantly qualitative, preparative stage, and certainly as a prebioinformatics stage, but it is important because the severity of the problem, i.e., the prevalence and geographical distribution of a disease and the comorbidities or fatalities from it, can evidently escalate very rapidly. Initially, there may be little knowledge of the disease, or even of the nature of the pathogen (and almost certainly not a detailed map of the pathogen genome), putting the science involved in a somewhat similar position to that available up to the 19th century. The first warning will be a sudden increase in the incidence and prevalence of a disease with characteristic symptoms and a significantly increased morbidity and/or fatality rate, whence it is likely due to a previously unknown species or a new variant of a known one. There may, nonetheless, be some prior probabilities as degrees of belief. Medical anthropologist Edward Hudson stated that sexually transmitted syphilis is a disease of "advanced urbanization" whereas yaws (caused by a bacterium that enters skin abrasions and gives rise to small, crusted lesions which may develop into serious deep ulcers) was "a disease of village and the unsophisticated" [1]. It could still be asked why early knowledge is useful, beyond being, of course, a valuable step toward characterizing the pathogen and determining its genome. While overpopulation and modern travel mean that an infectious disease today is no longer confined in space and time [1], the rise of communication technology, and especially the Internet, also means that knowledge is similarly no longer limited. One can imagine that many pandemics (global epidemics) in history could have been averted (primary prevention) or attenuated had the Internet been available in their time, even if vaccines, relevant effective pharmaceutical drugs, and even a significant knowledge of microbiology, were still absent. Although the first definitive appearance of the Black Death was in Crimea in 1347, and cholera was characterized in outbreaks in England and Italy, and in Jessore, India, in the mid to late 19th century, they were known to ancient physicians at specific locations in Asia long before that. Once relevant knowledge is available, defensive action of a simple "low tech" nature is sometimes sufficient. Many lives may have been saved from cholera, which kills by dehydration, by the availability of large amounts of water and salt. When John Snow identified cholera as water-borne, the only technology required was that for the removal of the handle of the pump to the infected supply. It is possible that in ancient India an early form of homeopathic medicine was used to expose people to low, safe concentrations of cholera bacillus to confer immunity. As far as is known, vaccination per se originated in 1796, when Edward Jenner took fluid from a cowpox blister and scratched it into the skin of an eight-year-old boy. A single blister arose on the spot, but the boy soon recovered.
Later, Jenner inoculated him with smallpox matter, and no disease developed. See Ref. [1] for historical discussions on these topics. Even before obtaining molecular details, there is characteristically consideration of therapeutic chemical substances, applying approaches used for previous diseases with similar symptoms, recently by repurposing approved drugs and historically by using herbal remedies, in the hope that they may apply. Of course, use of herbal remedies persists strongly today, in all nations. Searching on Covid herbal (without quotes) got 344,000,000 Google hits at the time of writing, and many of the visible hits were clearly as the query intended. Contagion, the 2011 fictional movie about a deadly worldwide virus outbreak, captured the features of early stages of past epidemics by being partly focused on a herbal treatment, forsythia, that was scientifically ill-conceived and promoted by blogging within the plot of the story; but there is in fact an ancient Chinese herbal remedy, based on that genus of plant of the same name, believed to have antimicrobial and anti-inflammatory activity. At the very least, previously used compounds have been judged safe for most patients by clinical trials, or by centuries of informal clinical trials (by abundant use) in the case of herbal remedies. Sometimes, as for COVID-19, known herbal extracts such as emodin, ursanoic acids, and steroid-like compounds, similar in appearance to those in plants of the genus Forsythia, make some scientific sense by modern criteria [8,9]. The chemical formulae of many such plant compounds earn the nickname "dihydroxy-chicken-wire", a humorous behind-the-scenes reference by pharmaceutical chemists to the flexible steel netting with hexagonal holes used to contain chickens and other small animals. However, they often have the above beneficial properties to varying extents, and while it could be said that herbal remedies have not typically proven as effective as novel therapeutic agents developed by science to combat infectious disease, it is also to be remembered that many of the therapeutic agents available today have been derived from or inspired by natural products, at the very least as starting points for drug discovery and development.

New diseases versus new variants of concern
Modern researchers have a much larger toolbox for research and development of preventative measures, but as the discussion above implied, in responding to the earliest stages of epidemics, they are in a similar position to their historical predecessors. Early waves of epidemics may involve previously unknown species of pathogens, and subsequent new variants of the identified pathogen can be significantly different in ease and mode of infection, symptoms, and severity. Researchers are then shooting at targets that are unclear, or that have changed to become unclear, respectively, and they must rely on clues from what appeared to work in apparently similar cases before. Hence, the question "How new is new?" is an important one. COVID-19 is currently the obvious example of both kinds of challenge. It is not of known historical concern, and the word "coronavirus" did not appear in Garrett's extensive index [1]. Coronaviruses were first considered as potentially affecting human health around 1965-1966 [2][3][4], were specifically described as "a new virus" found in the respiratory tracts of humans with common colds [2,3], and were officially declared as of a new genus "coronaviridae" in the mid-1970s.
They were subsequently known to be responsible for roughly 20%-30% of common colds, but they were only considered a source of potentially serious disease for adults, and a global threat for the first time, in the SARS outbreak in 2002 [4]. If, after an epidemic, a disease is established in a population (meaning that it continues for a significant period or persists at some endemic level), then it provides a reservoir that can form the basis of a new epidemic still due to that same species of pathogen. Technically, an epidemic is an increase in the incidence of disease, etc., in a defined human population that is clearly in excess of that which was expected during a specified period, i.e., above the normal endemic level of disease in an area. It applies even if the endemic level is low or zero, as seemed the case for HIV. A common example of increase above a significant, though low, endemic level is that in Escambia County, Florida, where the average number of early syphilis cases reported per quarter increased from 15 in 1987 to 75 in 1990; it was of sufficient concern for the Florida Department of Health and Rehabilitative Services and the CDC to investigate, using patient interview records to compare characteristics of patients with syphilis diagnosed before and during the increase of cases, and similar situations have occurred several times in many different locations [1]. Such an increase can be due to changes in population or behavioral changes in a human population, but can of course be due to changes in the pathogens themselves. In advanced pathogenic organisms, dangerous variations can appear due to sexual (or sex-like) recombination. This includes spreading of drug-resistance plasmids in bacteria and, in the case of viruses with segmented or modular domains in the genome such as influenza, new strains appearing particularly rapidly by reassortment of the viral RNA between different variants in the same host cell. The latter has some of the character of crossing over in sexual reproduction, which facilitates efficient evolution by tending to preserve, as interchangeable building blocks, the genes or parts of genes responsible for proteins or their subdomains respectively (i.e., parts of larger proteins that were originally separate smaller proteins). Coronaviruses possess no extensive degree of such genomic segmentation to facilitate reassortment, though the appearance of "deltacron" COVID-19, a hybrid of the delta and omicron variants, indicates that reassortment like that in influenza can still occur, even if with a lower probability. But even if evolution of a virus depends only on accepted mutations, that can be an efficient means of generating new variants if the pathogen has spread rapidly and represents a very large reservoir, as was the case with HIV. In a LinkedIn post on April 13, 2020 that led to discussion on other sites, the present author calculated, based on global prevalence at the time and viral genome copies per host organism, that there could be at least 10^21 SARS-CoV-2 virus particles in the world, representing possibly 10^26 bits or more of parallel viral computational power allocated to working by natural selection to produce variants that are better fitted to reproduce. Be that as it may, the number is inevitably astronomic. Not all variations have serious consequences, but some do and thrive due to natural selective pressure. COVID-19 alpha was becoming dominant around the beginning of January 2021, delta around May 2021, and omicron in early January 2022.
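The order of magnitude of that back-of-envelope estimate can be reproduced as follows; the two input values are illustrative assumptions chosen only to match the quoted orders of magnitude, not figures taken from the original post.

```python
import math

# Illustrative assumptions (not the author's exact inputs):
active_infections = 1e8    # worldwide infections, detected plus undetected
copies_per_host   = 1e13   # cumulative viral genome copies per infected host

total_particles = active_infections * copies_per_host      # ~1e21
genome_nt   = 29903        # length of GenBank entry MN908947.3 in nucleotides
bits_per_nt = 2            # 2 bits distinguish one of 4 RNA bases
total_bits  = total_particles * genome_nt * bits_per_nt    # ~6e25, i.e. ~1e26

print(f"particles   ~10^{math.log10(total_particles):.0f}")
print(f"information ~10^{math.log10(total_bits):.0f} bits")
```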
To consider the potential seriousness of accepted mutations in the pathogen proteins, the term "variant of concern" (VOC) is obviously a useful general concept. The term has primarily arisen in widespread use in connection with SARS-CoV-2. It is mainly used for variants of SARS-CoV-2 where mutations in the spike protein receptor binding domain (RBD) substantially increase binding affinity (e.g., N501Y) in the RBD-hACE2 complex (genetic data), while also being linked to rapid spread in human populations. Several national and international health organizations, such as the Centers for Disease Control and Prevention (CDC) (US), Public Health England (PHE), the COVID-19 Genomics UK Consortium for the UK, and the Canadian COVID Genomics Network (CanCOGeN), use many or all of the following criteria to assess what is meant by "concern". These are: increased transmissibility, morbidity, or risk of "long COVID"; ability to evade diagnostic tests; decreased susceptibility to neutralizing antibodies and/or antiviral drugs; ability to cause reinfection and/or infection of vaccinated persons; increased risk of serious conditions such as multisystem inflammatory syndrome or long-haul COVID; or increased affinity for particular groups (e.g., children, the elderly, or immunocompromised patients). Variants that meet one or a few of these criteria may be labeled "variants of interest" or "variants under investigation" ('VUI'), pending further research (a toy illustration of such a checklist is sketched below). In the case of variants, once the genome is known, there can be a level of prediction even before the new variant has impact, and in principle there is the opportunity to predict what the consequences might be of certain changes to the genome even before they happen. At present, very few cases of the latter occur, insight comes as hindsight, and such before-the-fact predictions remain essentially "in principle". Some useful directions, however, are described here. For VUIs, the concerns typically relate to changes in amino acid residues in host receptor binding sites and certain regions on the outside of surface proteins of the pathogen (the spike glycoprotein in the SARS-CoV-2 case) that serve as B-epitopes. These raise an antibody response and are the regions which bind to the antibodies so raised. Being at the surface and often in flexible loops, these change fairly readily, i.e., there is a higher probability of accepted mutations. In contrast, T-epitopes responsible for immune system memory in response to infection or vaccination can be buried inside the protein and exposed by proteolysis; because the amino acids have to fit in appropriately in the manner of a three-dimensional jigsaw, T-epitopes tend to change more slowly. Nonetheless, T-epitopes are not confined to such locations and, in the author's experience, they can often overlap with B-epitopes. That said, a sufficient number of T-epitopes will enable successful vaccination. Here, one functional distinction between B- and T-epitopes is seen in the omicron variant of COVID-19, in which the spike glycoprotein B-epitopes have changed so that vaccination no longer confers much resistance to infection, but the T-epitopes, being often in proteins not at the surface, are largely unchanged by natural evolution, and the cellular response is still efficacious in reducing the severity of the disease.
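Purely to make the verbal rule above concrete, a toy sketch of such a checklist follows; the criterion names paraphrase the list above, and the one-versus-several threshold is an illustrative reading of "one or a few", not an official CDC/PHE/CanCOGeN algorithm.

```python
# Criteria paraphrased from the text above; names are illustrative only.
CRITERIA = (
    "increased transmissibility, morbidity, or risk of long COVID",
    "ability to evade diagnostic tests",
    "decreased susceptibility to antibodies and/or antivirals",
    "reinfection and/or infection of vaccinated persons",
    "increased risk of serious conditions",
    "increased affinity for particular groups",
)

def classify(flags: dict[str, bool]) -> str:
    """Label a variant by how many of the concern criteria it meets."""
    met = sum(flags.get(c, False) for c in CRITERIA)
    if met == 0:
        return "not flagged"
    if met <= 2:  # "one or a few" criteria met
        return "variant under investigation (VUI)"
    return "variant of concern (VOC)"

print(classify({"ability to evade diagnostic tests": True}))
# -> variant under investigation (VUI)
```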
In practice, predictions of B- and T-epitopes may be boosted or, more correctly stated, ranked by adding a score based on the appearance of certain amino acid residues in known B- and T-epitopes, but also, even more pragmatically, based on past efficacy of such epitopes when synthesized, linked to a carrier protein, and tested in laboratory animals. For example, the author has often found the presence of histidine and tyrosine to be helpful in obtaining a good response in raising antibodies. The size of a potential epitope, i.e., the number of residues in it, can also be important for vaccine design, particularly for T-epitopes. The peptides are presented on the surface of an antigen-presenting cell, bound to major histocompatibility complex (MHC) molecules, and certain cells in human hosts are specialized to present longer MHC class II peptides of 13-17 residues, while nucleated somatic cells mostly present shorter MHC class I peptides of 8-11 residues (a toy sketch of such window-based ranking appears shortly below).

The example of the rise of COVID-19 in more detail
On December 31, 2019, the World Health Organization (WHO) was informed of a cluster of cases of pneumonia of unknown cause in Wuhan City, Hubei Province, China (e.g., Ref. [5]). The early official responses could neither reasonably be described as rapid nor as particularly well organized [6]. Because of an interest in emerging epidemics, the present author became aware of these cases very early in January, but the news at that time was patchy and unclear; there were five critical days from December 30, 2019 to January 3, 2020 in which the picture solidified [6], but the information was little more than that there was a potentially serious problem emerging, primarily and simply that this was not normal pneumonia. On January 4, 2020, the WHO reported on social media that there was a cluster of pneumonia cases, with no deaths, in Wuhan, Hubei province. On January 9, 2020, it was officially announced that a novel coronavirus had been identified in samples obtained from the Wuhan pneumonia cases, and around January 11th Chinese state media were reporting the first known death definitely caused by the virus. A diagnostic test was more-or-less publicly available by January 13th, on a very limited basis. Human-to-human transmission, a key step in the rise of a zoonotic disease (i.e., one of animal origin), was only publicly confirmed by January 20th [5,6]. All these were triggers that initiated interest in the present author, but preliminary bioinformatics studies of the genome could only begin around January 23, 2020, when Chinese researchers in association with the University of Sydney posted the updated genome sequence, considered as reasonably correct and complete, as GenBank entry MN908947.3. That original entry stated in a comment that this sequence version replaced MN908947.2 on January 17, 2020, and the current entry at the time of writing, with minor revision, is dated March 18, 2020 [7]. The GenBank entry at the time that it was used most extensively by the present author began as follows. Of course, there were several fast responses by well-resourced laboratories such as Oxford University based on knowing the SARS-CoV-2 genome, and these were soon directed at productive vaccines, as discussed below in Section 1.7.
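Returning briefly to the epitope ranking promised above, a toy sketch follows. The histidine/tyrosine bonus merely echoes the author's anecdotal observation quoted in Section 1.5 and is in no way a validated scoring model; the input sequence is an arbitrary toy, not a SARS-CoV-2 fragment.

```python
# Toy ranking of candidate MHC class I-sized T-epitope windows (8-11 residues).
BONUS = {"H": 1.0, "Y": 1.0}  # anecdotal residue bonus; illustrative only

def rank_windows(sequence: str, min_len: int = 8, max_len: int = 11):
    """Enumerate all windows of 8-11 residues and score them by composition."""
    scored = []
    for length in range(min_len, max_len + 1):
        for i in range(len(sequence) - length + 1):
            window = sequence[i:i + length]
            score = sum(BONUS.get(residue, 0.0) for residue in window)
            scored.append((score, i, window))
    return sorted(scored, reverse=True)

for score, pos, window in rank_windows("GAHYSTLKENYAHW")[:3]:  # toy sequence
    print(f"{window:12s} start={pos:2d} score={score:.1f}")
```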
The present author responded primarily to the above GenBank entry with preliminary bioinformatics studies focusing on the spike glycoprotein, and a preprint was posted on ResearchGate on January 30th [7], emphasizing likely bat origins, important conserved regions, and diagnostic, vaccine, and peptidomimetic design. Some aspects described, such as the essential SARS-like nature and immediate bat origins, could be considered controversial at the time, but less so later. On the same day, January 30th, the WHO declared a Public Health Emergency of International Concern. Recall that there are three phases in any fast response in such cases, being (1) awareness that there is a possible new infectious disease that could become an epidemic, (2) isolating, identifying, and characterizing the pathogen responsible, and (3) obtaining access to a reasonably reliable genome sequence, at least the genes for important surface proteins (the spike glycoprotein in the case of the above virus). This is reflected in the title of the above preprint [7], which therefore referred to the Wuhan Seafood Market Coronavirus, and references were made in the text to the Wuhan seafood market isolate. In principle, the author's project in its bioinformatics phase could have started several days before the 23rd: it is likely that earlier, less well validated versions of the genome, such as that submitted on January 5, 2020 by the Shanghai Public Health Clinical Center & School of Public Health, Fudan University, Shanghai, China, could have been very valuable for bioinformatics analysis, but sequence errors can lead to false trails and wasted time, and totally incorrect sequences due to sampling errors or contamination by other viruses or organisms are not unknown, leading to withdrawal from databases. Indeed, later news articles asserted that early sequences from early outbreaks in Wuhan were removed from a US government database by the scientists who deposited them, possibly due to similar concerns. As noted above, even when given the news that the virus was a coronavirus, relatively little more could be done by public researchers until the sequence of the viral genome was made widely accessible and verified. As also noted above, that in practice means, primarily, GenBank. Once accessible, the genomic sequence of a pathogen immediately makes possible bioinformatics studies that can relate the causative agent of a new epidemic to lessons learned from any already known related pathogens, lead on to protein science, then wet laboratory biotechnology, and then more rational design of diagnostics, vaccines, and pharmaceutical agents. The preprint [7] was followed by two fuller reviewed papers by the present author in February and June 2020 [8,9]. All these papers highlighted the risk of emergent new strains and concentrated on the conserved segments of the spike glycoprotein that were at least partly exposed and that must be important to the infection by, and replication and survival of, the virus. Also on January 23rd, a scientific preprint from the Wuhan Institute of Virology was posted on bioRxiv, and e-published in Nature on February 3rd [10], announcing that a bat virus with 96% similarity had been sequenced in a Yunnan cave in 2013. While earlier genome versions, and even those removed, are very likely to have been important, it is also probably fair to say that the potential seriousness of the outbreak was not fully clear until around the time of the appearance of version MN908947.3.
The Chinese researchers with the University of Sydney gave a fuller account of MN908947.3, describing the genes and the associations with other coronaviruses, in February 2020 [11]. It was not until much later, at least by the standards of these timescales, that on February 11, 2020 the WHO named the syndrome caused by this novel coronavirus COVID-19 (Coronavirus Disease 2019), and it was only formally classified as a pandemic as late as March 11, 2020. The severity is, of course, now very clear. On the day of first writing this sentence (January 2, 2022), there had been cumulatively some 300 million known cases of COVID-19 worldwide, and 5.5 million known deaths from it, with just under 4,000 reported for January 1, 2022 alone. Rewriting on March 3, 2022, there were some 441 million cases and just under 6 million deaths. On close-to-final writing on March 18, 2022, there had been 467 million cases and just over 6 million deaths, and 6.28 million deaths at the time of final typesetting corrections. Probably all these numbers are gross underestimates, due to frequent mild or absent symptoms, misdiagnosis, and underreporting. Many journalists have speculated, perhaps not unreasonably, that such global numbers as quoted above could be as much as twice as high, or even three times as high. With the rise of the omicron variant, infection levels were accelerating globally, but in many countries such as the US and UK that peak has now passed. Either way, the situation on January 1, 2022 showed a hugely significant difference from the situation on January 1, 2020. Early indication of a high fatality rate, essentially the probability of dying if one has the disease, is an important alert. In 1997, a few hundred people became infected with the avian A/H5N1 flu virus in Hong Kong and 18 people were hospitalized. Six of the hospitalized persons died. The rise of COVID-19 was somewhat more alarming. When the present author began the COVID-19 project, it was prior to the WHO announcement, and there was just one death definitely due to the virus described in the news, and possibly two. But subsequently 17 deaths were reported as occurring by January 23rd, and in view of the small number of cases, this was sufficient to worry that there was a new disease or variant with a high fatality rate. The state of knowledge at around the start of the study was that it was probably a form of severe acute respiratory syndrome, SARS, but not necessarily sufficiently close to be called SARS. The number of deaths as described at the beginning of this Section seemed consistent with the concern, nonetheless, because the fatality rate for the earlier SARS appeared to be around 9-10%, and that for the related Middle Eastern respiratory syndrome, MERS, was even higher at 34-35% (though this may be misleading, as many mild cases may not have been reported). While the preliminary analyses indeed indicated that it was SARS [7,8], some authorities were then declaring it was not. Authorities may thus have possibly been making a fine distinction to alleviate public concern, although for researchers outside the innermost circles wishing to gather knowledge, that was possibly less than helpful. It is certainly now considered sufficiently SARS-like to justify the final name of SARS-CoV-2.
Recall that the virus in the earliest paper by the present author was referred to as the Wuhan Seafood Isolate coronavirus or just the Wuhan Seafood Isolate, and later 2019-nCoV: it was not until February 11th that the disease was named COVID-19 by the WHO, followed by the Coronavirus Study Group (CSG) of the International Committee on Taxonomy of Viruses, which named the causative agent SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) [5]. The author's early papers analyzing the above GenBank entry indicate the simplicity and power of bioinformatics, because as well as emphasizing that it was essentially SARS and that previous SARS studies were likely to be relevant, another interesting observation of the above papers was that "All the top matches are bat host species" [8]. That is probably the most popular choice of immediate host now, at the time of writing this review, although it remained controversial for some time. That is also to be seen in the light of understanding that the immediate host for SARS was known to be the civet, although a bat was subsequently determined to have been the source of the civet infection. In July 2020, the present author also predicted that, like influenza viruses, many coronaviruses and their spike glycoproteins contained hemagglutinin-like binding sites to bind host cell sialic acid, which was contrary to opinion at the time, but did not contain a neuraminidase (sialidase) or similar esterase to reverse the binding, suggesting an increased risk of hemagglutination between red cells, and between red cells and the capillary wall, and hence an increased risk of hemolytic anemia, multiple thrombosis, and kidney damage. Two later papers in the series focused on conserved regions and variations in the coronavirus proteins, one highlighting a highly conserved sequence motif in Nsp3 of SARS-CoV-2 as a potential therapeutic target, and one indicating how the highly conserved KRSFIEDLLFNKV motif, a target "Achilles heel" of the virus associated with host cell entry predicted in the earlier papers [7][8][9], becomes more extensively exposed to antibody when antibodies bind elsewhere [14].

The biopharmaceutical response to COVID-19
The early responses by well-resourced organizations such as Oxford University were also rapid, but they might have been faster still with more immediate funding from government agencies. However, at the time the picture was unclear, the true extent of global danger was not obvious, and perhaps to many administrators it all seemed more academic. Another possible reason was that, traditionally, vaccines have been based on killed or attenuated viruses, without knowing the genome or any other molecular details, but times have changed and perhaps proposals sounded futuristic: responses based on knowing the genome represented almost all the academic and industry responses to COVID-19, and the use of RNA and DNA vaccines had never been tried before in a large-scale response to an epidemic. An awareness by research scientists of the true state of the art was important.
In an interview in March 2020 with Bernarda Tundzhay, a health journalist, the present author (BR) was not as skeptical about the speed of development as other experts [15]; but keeping in mind that one of the older but faster vaccines to develop, MMR, took four years to develop, that there is still no approved successful vaccine against HIV after more than 40 years, and that there is little long-term immunity to the common cold, which is a coronavirus infection in 20%-30% of cases, it was important not to raise false hopes. Quoting the present author, the journalist stated that "Usually, it takes about one year to initially test vaccine or antiviral products before moving them into clinical trials… However, during the ongoing Covid-19 global pandemic, there is an obvious need for a quicker turnaround, but a rushed vaccine or antiviral of any kind could cause safety issues, such as where an autoimmune reaction is raised against a patient's own proteins, he added" [15]. The estimate of about a year was considered an optimistic fastest possible limit, largely based on the time period for development of some animal vaccines (which typically undergo less stringent tests), but considering that the first steps of rollouts took place in mid-December 2020 and the main rollouts in 2021, it was not a bad guess. The fast responses in vaccine development include the Oxford-AstraZeneca DNA vaccine effort, said to have taken 11 months. The mRNA vaccine from Pfizer received FDA approval on December 11, 2020, and the Oxford-AstraZeneca vaccine shortly after. For many industrial nations, the one-year estimate was more-or-less "spot-on". AstraZeneca received a conditional marketing authorization valid throughout the European Union on January 29, 2021. Research and development on these vaccines and others, such as the Moderna vaccine, are considered by many experts and the press as unexpectedly rapid (e.g. Ref. [16]). Many attribute this to the RNA/DNA nature of the vaccine constructs, which were somewhat unexpected. Admittedly, the constructs were ready as a general method of combatting new pathogens, using a "cartridge" or "plug'n'play" approach. The Oxford construct used a common cold virus that infected chimpanzees, ChAdOx1 (Chimpanzee Adenovirus Oxford 1), programming the DNA to encode the spike protein. However, "ready" meant that successes had been largely confined to the laboratory and a few small trials. According to the Oxford group, prior to Covid, 330 people had been given ChAdOx1 vaccines for a variety of diseases ranging from flu to Zika virus, chikungunya, and prostate cancer [16]. There was one unstated prediction or presumption by the present author that was not correct. The use of DNA- and RNA-based vaccines, and particularly their rapid approval by the FDA etc. for human use, were somewhat unexpected. The present author had focused on peptide-based vaccines [7,8,14]. They were still considered state-of-the-art, and new in the sense that use was still largely confined to veterinary medicine, as in Foot and Mouth Disease.
The peptide approach still requires knowledge of the genome or the details of proteins generated from it, and the present author focused on pathogen protein analogues as prepared on a peptide synthesizer, in which he had most experience, because they were considered for many years as the most promising new generation of "cartridge" or "plug'n'play" approaches to vaccines, both in terms of fast response and relative safety compared with traditional vaccines made from killed or attenuated viruses [17]. By focusing on the parts of the protein sequences of the pathogen that appear to matter to a B-cell and T-cell response, and ignoring only those details concerning unnecessary biological features that might lead to adverse effects, the peptide approach remains attractive. But also, using the DNA or RNA approach in no way diminishes the huge benefits of the computational and knowledge-based approach, the use of bioinformatics, the appreciation of the fundamental features required for diagnostics and vaccines, and the identification of, and response to, variants of concern, as follows.

Extracting knowledge
The mathematical theory of knowledge underlying the Q-UEL language and related inference and prediction methods, as used by the author in relation to the rise of COVID-19, may appear unfamiliar. Consequently, it should be emphasized that it is presented here primarily as a way of pulling together formal discussion of the kind of information and computational tools required. The fight against emerging disease has priority over any personal opinions regarding mathematical elegance, so it is fortunate that there are many beneficial features of the approach used that could be reimplemented in somewhat different, and perhaps more familiar, ways. The focus is on what the tools need to do, of which these are but examples. In the author's opinion, however, the mathematical basis has considerable advantages, and it is arguably the natural and conservative solution, at least in the sense that it builds on a highly successful standard in physics that goes back to the 1930s. Descriptions of the Q-UEL language and uses of it are provided in many published papers: see especially Refs. [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. Examples of literature sources from which relevant qualitative knowledge (discussed later below) can be extracted are Refs. [38][39][40][41]. The approach in the case of making quantitative, probabilistic predictions from knowledge associated with probabilities is particularly emphasized in Refs. [24,29-31], and the basic theory of the inference method that underlies such predictions, called the Hyperbolic Dirac Net (HDN), is given in Refs. [42][43][44][45]. One way to introduce the ideas more briefly in the context of emerging diseases that are relatively new to science is as follows. Possibly the most general theoretical statement that can be made about knowledge-gathering methods is that if X is something new and hitherto unknown, and Y is something with similar features that is well known and known to have properties Z, then it is certainly worth considering, subject to further scientific investigation, that X has properties Z. Recall comments relevant to the idea of "similar features", such as symptoms for drug repurposing and herbal remedies to treat emerging epidemics including COVID-19 (Section 1.4).
This natural and obvious approach has been well known in a more structured form to pharmaceutical chemists for drug discovery purposes, and is useful for similar pharmaceutical reasons even when X and Y are large structures such as pathogens or pathogen proteins. An important step in the present author's COVID-19 project was the observation that the amino acid residue sequence of the spike protein of the pathogen in the Wuhan seafood market isolate was closely related to that of the coronavirus responsible for the earlier SARS. In the present cases of interest, knowledge k regarding X, Y, and Z, say as k(X), k(Y), k(Z), and important joint knowledge k(X; Y) and k(Y; Z) from which k(X; Z) is deduced on the above assumption, can take diverse initial forms. Recall (Section 1.2) that the main classes are (a) structured data, including spreadsheets, tables, and data types such as DNA, RNA, and protein sequences, (b) unstructured data such as authoritative natural language text on webpages, and (c) knowledge representation stores (KRS) containing elements of knowledge extracted from both of the above sources into a canonical form that both computers and humans can easily read and use to draw inference, such as in the case of the Q-UEL language discussed below. One might say that there is some functional model for inference f such that one can write f(k(X; Y), k(Y; Z)) → k(X; Z). In the case of structured sources it is perhaps particularly useful to see the k in k(X; Y) and k(Y; Z) as association constants (with a natural logarithm as Fano mutual information), from which k(X; Z) is deduced using certain interdependency and independency assumptions. Such associations are, for the above and many other purposes, quantified wherever possible as K(A; B) = P(A|B)/P(A) = P(B|A)/P(B), and as the conditional probabilities P(A|B) and P(B|A). While K(A; B) is symmetrical, i.e. K(A; B) = K(B; A), the probability dual {P(A|B), P(B|A)} discussed below is, in a sense, a kind of dualized or directionalized K(A; B). In general, P(A|B) is not equal to P(B|A): they are mutually related by Bayes' Rule, which can be expressed as P(A|B)P(B) = P(B|A)P(A). The above dual is a prominent feature of the Q-UEL language [20][21][22][23][24][25][26][27][28][29][30][31][32][33], which is based on the Dirac notation. The dual is particularly important in the construction of inference nets that, unlike Bayes nets, are not confined to a Directed Acyclic Graph (DAG) and are thus free of the severe independency assumptions that this can imply (see e.g. Ref. [30]). Recall (Section 1.2) that Q-UEL can be considered as an extension to XML for probabilistic semantics and Artificial Intelligence. Note that one cannot deduce K(A; B) precisely from P(A|B) and P(B|A), nor vice versa (though there are constraints), but from just these three measures K(A; B), P(A|B), and P(B|A), many other probabilistic measures can be calculated, notably prior probabilities P(A) including prevalence, joint probabilities P(A,B), negative forms P(not A, B), and so on, and hence many basic measures of epidemiology and evidence-based medicine such as positive and negative predictive value, predictive odds, and likelihood ratios including risk factors, odds ratios, and so on. Unlike XML, in Q-UEL all tags can take on algebraic and arithmetic force and can be used directly as building blocks of inference networks for automated reasoning.
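As a worked numerical illustration of these relationships (with made-up counts, not data from the COVID-19 studies), the following sketch derives K(A; B), the conditional-probability dual, and the measures recoverable from them from a simple 2x2 contingency table.

```python
# Made-up 2x2 contingency table of counts for events A and B:
n_AB, n_AnotB, n_notAB, n_notAnotB = 40, 10, 20, 130
n = n_AB + n_AnotB + n_notAB + n_notAnotB           # 200

P_A  = (n_AB + n_AnotB) / n                         # prior P(A)   = 0.25
P_B  = (n_AB + n_notAB) / n                         # prior P(B)   = 0.30
P_AB = n_AB / n                                     # joint P(A,B) = 0.20

pfwd = P_AB / P_B                                   # P(A|B) ~ 0.667
pbwd = P_AB / P_A                                   # P(B|A) = 0.800

# Association constant; both forms agree, and ln K is the Fano mutual information
K = pfwd / P_A                                      # ~2.667
assert abs(K - pbwd / P_B) < 1e-12

# Bayes' Rule as stated above: P(A|B)P(B) = P(B|A)P(A)
assert abs(pfwd * P_B - pbwd * P_A) < 1e-12

# From just {K, pfwd, pbwd} the priors and the joint are recoverable:
print(pfwd / K, pbwd / K)      # P(A) = 0.25, P(B) = 0.30
print(pfwd * pbwd / K)         # P(A,B) = 0.20
```

Here pfwd/K and pbwd/K recover the priors because K = P(A|B)/P(A) = P(B|A)/P(B), in line with the definitions above.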
Also recall (Section 1.2) that an example of a typical basic Q-UEL tag being used in programming mode for automated reasoning by an inference net is as follows. Here entries that are replaced by specific values are in italics. pfwd (probability forward) refers to a conditional probability such as of form P(A|B), say 0.93, and pbwd is its adjoint form such as P(B|A), say 0.27, these being important when tags are used in construction of a Hyperbolic Dirac Net (HDN) as an inference net as described later below, in Section 4.1. Conditional probabilities P(A|B) and P(B|A) are usually sufficient when we can interpret Dirac's basic braket <A|B> (see below) as <A | if | B> or <B | are | A>, or (with caution) <A | 'is caused by' | B> and <B | causes | A>. All these have an analogous interpretation in quantum mechanics, in terms of vectors <A| and |B>, and Hermitian operators and matrices [21][22][23][24][25][26][27][28][29][30]. When the value of an association constant, say 5.64, is also to be assigned, its value can be included in various ways, e.g. by assigning the values {pfwd, pbwd}, assoc to the tag. When tags are generated by structured data mining, stored for long-term use in the Knowledge Representation Store (KRS), or exchanged on the Internet, the format is as follows.
< subject-expression Pfwd:=pfwd | relationship-expression assoc:=assoc | object-expression Pbwd:=pbwd >
For implementation of the method in automated reasoning including by inference nets, which unlike Bayes Nets can be bidirectional general graphs and evolve under rules of categorical logic, grammar, and definitions, it is important that the value of the above tag, say in general <A | R | B> when analogous to <A|B> = <A | if | B>, has the following hyperbolic complex value:
<A | R | B> = (1/2){P(A|B) + P(B|A)} + (1/2)h{P(A|B) − P(B|A)}.
Here h is the hyperbolic or split-complex imaginary number such that hh = +1, rediscovered in various guises by Dirac (e.g., as linear operators and γ-matrices). The above also reveals that the probability dual {P(A|B), P(B|A)} is one way of writing that value, i.e., of writing quantum mechanical, but purely h-complex, probability amplitudes. Q-UEL stands for "Quantum Universal Exchange Language" because it builds on the Dirac notation and Dirac's associated algebra for quantum mechanics. See Refs. [21][22][23][24][25][26][27][28][29][30][31][32][33] for explanation and discussion; the important point for present purposes is that these tags representing elements of knowledge can be brought together in various ways for probabilistic inference and probabilistic semantic reasoning (Section 1.2). In the Sections below, the emphasis is on the Q-UEL tags generated by the various methods, and they are sufficiently readable by the human eye that their use in reasoning, and their use in carrying knowledge between algorithms, can be appreciated intuitively (see in particular Refs [24][25][26][27][28][29][30][31]). While the Semantic Web is not inherently probabilistic and there is lack of agreement on the best probabilistic approach (see discussion in Ref. [21]), the Q-UEL language is compatible with it because probabilistic inference includes handling the case of certainty, i.e., with P = 1 as a limiting case.
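To make the h-complex value concrete, the following minimal sketch (class and variable names are illustrative; the encoding is the mean/half-difference form of the equation above) implements split-complex arithmetic with hh = +1 and shows that when brakets are multiplied, the forward and backward probabilities of the dual multiply separately.

# Minimal sketch of split-complex ("hyperbolic") arithmetic with hh = +1,
# encoding the probability dual {P(A|B), P(B|A)} as
#   value = (pfwd + pbwd)/2 + h*(pfwd - pbwd)/2
# as in the equation above. Class and variable names are illustrative.

class Hyperbolic:
    def __init__(self, real, hpart):
        self.real, self.hpart = real, hpart

    def __mul__(self, other):
        # (a + bh)(c + dh) = (ac + bd) + (ad + bc)h, since hh = +1
        return Hyperbolic(self.real * other.real + self.hpart * other.hpart,
                          self.real * other.hpart + self.hpart * other.real)

    def dual(self):
        # recover {pfwd, pbwd} from the h-complex value
        return (self.real + self.hpart, self.real - self.hpart)

def braket(pfwd, pbwd):
    return Hyperbolic((pfwd + pbwd) / 2, (pfwd - pbwd) / 2)

# Chaining <A|B><B|C>: forward and backward probabilities multiply
# separately, which is what lets an inference net run in both directions.
x = braket(0.93, 0.27) * braket(0.50, 0.40)
print(x.dual())  # (0.93*0.50, 0.27*0.40) = (0.465, 0.108)

This componentwise behavior is why a net built from such tags can propagate P(A|B)-type and P(B|A)-type information through the same product.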
However, there is a twist. Because statements and data on the World Wide Web are not always true or certain, or represent an error or do not apply to all possible cases (e.g., do not have universal scope), or change with new information (as was and is commonly the case for web pages about COVID-19), Q-UEL tags from data mining of unstructured data such as natural language text and bioinformatics sources are often annotated as to provenance and time-stamped. Importantly, in any computation using them, they are treated as assertions. They take the value 1, as in e.g. P(A|B) = P(B|A) = 1, and odds take the value 1, by default until there is evidence to the contrary. Perhaps at first counterintuitively, 1 indicates uncertainty or lack of impacting knowledge. In such cases, Q-UEL tags also assume that by default K(A; B) = 1 and collectively satisfy the requirement that the mutual information content is I(A; B) = ln(K(A; B)) = 0; importantly, they have no effect on a purely multiplicative inference net, and are in accord with Karl Popper's theory of scientific knowledge discussed in e.g. Refs. [21,24,25,27].
Some preliminary Q-UEL tag examples from the COVID-19 studies
For example, during the first few days of the SARS-COV-2 studies, methods of interacting with the Internet and source data developed for human genomics [32,33] (see below) were refined to generate a tag containing the spike glycoprotein sequence [8] that could be stored in a Knowledge Representation Store (KRS) for future use as input and as a record of the data at the time [9]. Note that, consistent with the opening remarks in Section 2.1, and because of the implied defaults in Q-UEL concerning probabilistic values, one can in this kind of case make use of similar ideas without reference to the underlying mathematics. The tag is quantitative but only in the sense that it carries biosequence information: there are no probabilities mentioned. Indeed, the above could obviously be automatically expressed in XML, although the result is typically more complicated and less readable and would still need a Q-UEL-like system to interpret and use the information carried. For example, unlike XML attributes, those in Q-UEL can have a rich formal ontological structure, e.g. defined as the Attribute Metadata Language (AML), e.g. a form A:=B, or A:=(B,C), or A:=(B:=C, D:=(E:=F,G)), and so on. Minimally it has the form metadata:=value, e.g. gender:=male, although some exceptions are optionally allowed for authorized nominal categorical data, such as male, that can stand alone. The metadata operator ':=' has the value, more specific instance, or example to the right. One advantage of this is that different ontological structures for the same basic kind of information can be expressed and combined, and a main purpose of Q-UEL is to enable interoperability by using this as a kind of "tourist phrase book". Another difference from XML is that several attributes form a logical expression, and the default logical operator between attributes, if not shown, is AND (arguably, if the organization of attributes in XML implies a logical expression at all, the basic formalism confines it to AND logic). The above is an example of a well-annotated tag of the type that is usually intended for permanent storage in the Knowledge Representation Store (KRS). Simpler tags derived from them can be used as working tags, sometimes temporary, for semi-interactive explorative studies.
In contrast to the above tag, which has implied default Pfwd, Pbwd, and assoc attributes of value 1, the following is a tag derived by analysis of many such sequences and the associated three-dimensional structures where known, to derive statistical relationships between amino acid residue sequence pattern and secondary structure as α-helix (H), β-pleated sheet or extended chain (E), and coil or loop (C). Two example applications of this are to study the effect of sequence changes in new variants, and to predict surface coil or loop as a potential B-epitope that can initiate antibody response with antibodies capable of binding to that region. Note that the tag value attributes Pfwd, Pbwd, and assoc are now present and so no longer imply value 1 by default. In essence, the above is an example of a quantitative, probabilistic element from machine learning, and many such tags describing many protein sequences are used to predict the conformation of each amino acid residue in a given sequence as being in an H, E, or C state. In earlier parlance used in the field of protein science, it represents a GOR parameter for secondary structure prediction and similar kinds of prediction, but now in Q-UEL format. The input for the machine learning process is typically the data in proteins of known sequence and conformation derived directly or indirectly from the Protein Data Bank https://www.rcsb.org/. For example, the following is the sequence of the receptor binding domain in 6M0J, the crystal structure of the SARS-CoV-2 spike receptor-binding domain bound with ACE2, which is of particular interest in Results Section 4. It also further illustrates some common features of Q-UEL use. It is another important source of amino acid residue sequence information. The methods for generating tags including known secondary structure descriptions, and the above Q-UEL-QGOR tag as a result of machine learning from them, are described in Refs. [29,33,43,45]. For knowledge that is primarily from structured bio-sequence data combined with unstructured, textual data, points relating to more general principles are most profitably illustrated by examples in overview. Recall that when the author of this review began his studies on the Wuhan Seafood Market isolate entry MN908947.3, it had only been known for a few days that it was a coronavirus, and knowledge gathering tools were required because he had relatively little knowledge of coronaviruses. Further, authorities were maintaining, perhaps to alleviate public concerns, that it was not SARS. The immediate impression gained by use of tools to access standard bioinformatics tools and databases on the Internet was that the important spike glycoprotein was essentially the SARS spike glycoprotein. In the first few days, the closest homologies (sequence matches) of the spike protein with proteins in the protein entries of GenBank by BLASTP were bat coronavirus spike glycoproteins with 77%-81% sequence identity. Just after the work of the first study [8] was completed, but in time for the second paper [9] submitted on 2nd February, a bat coronavirus spike glycoprotein with 97.41% identity to that of the Wuhan Seafood Market isolate appeared on GenBank, on 29th (submitted 27th) January 2020. This was the controversial posting by researchers at the CAS Key Laboratory of Special Pathogens, Wuhan Institute of Virology, Center for Biosafety Mega-Science, Wuhan. However, even the first matches unlocked a wealth of relevant information that was known for SARS-COV-1 and likely to apply to SARS-COV-2.
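Returning briefly to the GOR-style tags described at the start of this passage, the following minimal sketch shows how many such statistical elements can combine in prediction (the window size and the tiny log-odds table are illustrative assumptions, not the actual Q-UEL-QGOR parameters): log-odds contributions for the states H, E, and C are summed over a sliding window and the best-scoring state is assigned to each residue.

# Minimal GOR-style sketch: sum log-odds contributions ln K for states
# H (helix), E (extended), C (coil) over a sliding window and pick the
# best state per residue. The tiny parameter table is illustrative only;
# real parameters come from machine learning over known structures.

LOGODDS = {  # hypothetical ln K(state; residue) contributions
    'A': {'H': 0.4, 'E': -0.1, 'C': -0.2},
    'V': {'H': -0.1, 'E': 0.5, 'C': -0.3},
    'G': {'H': -0.5, 'E': -0.3, 'C': 0.6},
    'P': {'H': -0.8, 'E': -0.5, 'C': 0.7},
}
NEUTRAL = {'H': 0.0, 'E': 0.0, 'C': 0.0}  # residues with no parameters

def predict(seq, half_window=2):
    states = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half_window), min(len(seq), i + half_window + 1)
        totals = {s: sum(LOGODDS.get(seq[j], NEUTRAL)[s] for j in range(lo, hi))
                  for s in 'HEC'}
        states.append(max(totals, key=totals.get))
    return ''.join(states)

print(predict("AAVVGPGAAA"))  # prints 'HEECCCCCHH' for this toy input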
Importantly, the SARS coronavirus spike glycoprotein of the 2002-2003 infection, now called SARS-COV-1, was known to bind the ACE2 receptor initially and to have two cleavage sites associated with entry to the host cell. The three-dimensional structures of SARS-COV-1 spike glycoproteins had been determined, notably entry 5XLR, which had been obtained in 2017 by cryo-electron microscopy to 3.8 Å and refined by conformational calculations, and it was possible to overlay the corresponding important sites such as cleavage sites [8,9]. That study found the three receptor-binding C-terminal domain 1s (CTD1s) of the S1 subunits in symmetric "down" positions. Binding of the "down" CTD1s to the receptor ACE2 was not possible due to steric clashes, suggesting that conformation 1 represents a receptor-binding inactive state. Conformations 2-4, also examined, were found to be asymmetric, showing that the ACE2 binding region rotates away from the "down" position by different angles to an "up" position, while the "up" CTD1 exposed the receptor-binding site for ACE2 engagement. It was also known that the above conformational change is required for the binding of SARS-CoV-1 neutralizing antibodies targeting CTD1. This description could be extended to other betacoronaviruses using CTD1 of the S1 subunit for receptor binding. The betacoronaviruses include OC43 and HKU1 (which can cause the common cold) of lineage A, SARS-COV-1 and SARS-COV-2, both of the same lineage B, and MERS-CoV of lineage C. Consequently, it was reasonable to suppose that the SARS-COV-2 spike glycoprotein had similar structure and behavior to that of SARS-COV-1 [8,9], as turned out to be the case when the SARS-COV-2 spike glycoprotein structure PDB entry 6VYB became available for comparison on 11th March [10]. Q-UEL tags in the earlier genomic studies, generated some 9 months prior to COVID-19, were still retained in the KRS, and played a role in further studies investigating the role of mitochondrial signaling. Signaling by peptides encoded on small open reading frames in the mitochondrial DNA is known to be involved in response to cell stress. One of these earlier example tags is as follows. Nonetheless, subsequent studies of knowledge gathering from the literature showed that in some respects mitochondria seek to "carry on regardless" to maintain basic housekeeping functions in the cell [32]. In contrast, though, the knowledge captured on tags in the earlier studies usefully contained galectin-3, which, expressed in mitochondria as well as the nucleus, cytoplasm, cell surface, and extracellular space, is involved in hyperinflammation and fibrosis in severe COVID-19 patients. As further auto-surfing revealed, galectin-3 regulates mitochondrial stability and has antiapoptotic function. It appeared to be a molecule worth considering both as a target for inhibitors and perhaps as a diagnostic biomarker for extreme COVID-19 response including acute respiratory failure. This turns out to be the case, and these studies and the literature will be reported elsewhere.
Such approaches are important irrespective of the vaccine development strategy
Since the above examples relate to protein sequence and structure, while the vaccines produced by industry against COVID-19 were (for the first time to any extensive degree) DNA and RNA vaccines, it is useful to comment on why the kind of Theory and implementation discussed above (and Methods described below) remains important. In principle, in considering the model f(k(X; Y), k(Y; Z)) → k(X; Z), one might argue that it is best written as f(k(X; Y |c), k(Y; Z |c)) → k(X; Z |c), to ensure that it relates to some specific context or conditions c. Asking that we can replace k(X; Z |c) by k(X; Z |c') might well cause f to fail. This is reflected in the discussion in Section 2.1 regarding assertions and particularly considerations of scope of a statement. Notably, it is not obvious that some features of the present author's early studies on design of vaccines against SARS-COV-2 were necessarily relevant to the kinds of vaccines that were the first and most successful in combatting the COVID-19 pandemic. The initial papers focused largely on peptide-based vaccines, diagnostics, and peptidomimetic drugs [7][8][9][12][13][14]. However, the computational aspects, including prediction of the epitopic sites in the pathogen proteins that form the basis of hoped-for peptide vaccines, remain no less valid and no less general. That is because these sites appear in the biotechnologically produced proteins, the spike glycoprotein in the COVID-19 vaccine case, in most cases encoded in the RNA or DNA of the COVID-19 vaccine constructs as discussed above in Introduction Section 1.5. Most notably, these sites can change with different variants, and have done so for variants of concern. For example, the first tag in Section 2.1 containing the spike glycoprotein sequence could then be used to access other information, such as three-dimensional structures determined experimentally for SARS-COV-1 and SARS-COV-2 proteins and their interaction with antibodies and the ACE2 receptor, and to initiate bioinformatics and protein structure analysis studies, such as of changes in exposure of sidechains, considered as the basis for peptide-based diagnostics, vaccines, and peptidomimetic drugs [14]. Here # indicates ACE2 binding, and @ antibody binding. A conformationally disordered loop is indicated by ~. The extent to which a sidechain is buried is given on the scale 0,1,2,3,4,5,6,7,8,9,X. A smoothed score over neighbors is shown in every case below it, as indicated by 'sm.' At the end of the description to the right, and in the smoothed score, a residue obscured by glycosylation is indicated by % [15]. For more details, see Ref. [15]. Inevitably, every COVID-19 vaccine tried, tested, and found promising features the spike glycoprotein subsequence targets believed to be first discussed and proposed publicly by the present author, albeit because they generate or contain the entire SARS-COV-2 spike protein or most of it. They include the content of the above tag, which is important as the site of binding of many antibodies raised against the spike protein. However, it was noted that this region of the sequence is variable amongst coronaviruses, and attention shifted to the above highly conserved KRSFIEDLLFNKV motif [7][8][9]. The earlier work [7][8][9], including tags like that above, nonetheless also remains relevant in its details.
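For the smoothed burial score ('sm.') just described, a minimal sketch is as follows (the window width, the rounding, and the treatment of the X and ~ symbols are illustrative assumptions, not necessarily the exact convention of Ref. [15]): each burial score on the 0-9 plus X (= 10) scale is replaced by the rounded average over its sequence neighbors, skipping disordered positions.

# Minimal sketch of neighbor-smoothing a per-residue burial score string
# on the 0-9 plus X (= 10) scale; '~' (disordered) positions are skipped.
# Window width and symbol handling are illustrative assumptions.

def smooth(scores, half_window=1):
    vals = [10 if c == 'X' else None if c == '~' else int(c) for c in scores]
    out = []
    for i, v in enumerate(vals):
        if v is None:
            out.append('~')
            continue
        window = [x for x in vals[max(0, i - half_window): i + half_window + 1]
                  if x is not None]
        m = round(sum(window) / len(window))
        out.append('X' if m == 10 else str(m))
    return ''.join(out)

print(smooth("09X~3558"))  # prints '46X~4466' for this toy input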
The whole spike protein does have advantages of multiple sites and polyvalency, and of taking care of glycosylation in a natural way, matters that complicate (but by no means disqualify) the use of peptide vaccines, but the possible advantage of focusing on individual segments rather than just presenting the whole spike protein is that it is likely to focus the immune response on conserved regions. For example, the current notorious omicron strain of SARS-COV-2 has many mutations in the regions reviewed prior to omicron in Ref. [14] as most readily binding antibodies, and thus responds to the current vaccines rather weakly as far as the B-cell antibody response is concerned. Consequently, vaccinated individuals can be readily infected even though the cellular T-cell response remains, so that the disease is less severe. There is the argument that this situation of a high incidence rate and low fatality rate is an advantage in terms of building up herd immunity, a seemingly satisfactory approach if the natural infection were to approach the effect of large-scale vaccination by an attenuated virus preparation. It would seem a very risky strategy, however, for many reasons, not least because there is no guarantee that a new highly infectious strain will not be "less kind" regarding the fatality rate.
Methods by example: practical support from automatic knowledge gathering
By "Methods" here is meant an account of how Internet information, and particularly the Q-UEL tags derived from it, are used in the context of strategy for responding to an emerging epidemic, especially as regards the workflow. It is also arguably appropriate to consider here those less obvious knowledge-gathering tools that support the above studies of Section 2 in a practical context, and which speedily provide an initial orientation as to how to go about the kinds of bioinformatics studies touched upon as examples in Theory Section 2. Although it is somewhat of a simplification, the software tools used in the COVID-19 project could be broadly classified into three types that may be arbitrarily called A, B, and C. This also reflects, to a large extent, their order of use in a workflow. Tools A comprise those involved in knowledge gathering from the Internet, including generation of alerts based on news items regarding possible new infectious diseases or variants of concern. The results of this may initiate and shape a subsequent project, so they are of crucial importance in the sense that tools B and C will not be invoked to address a potential epidemic without them. Tools B are those that interact with web pages to obtain data and make use of standard bioinformatics tools. Notably, DNA, RNA, and protein sequences are extracted from GenBank https://www.ncbi.nlm.nih.gov/genbank/, with annotation when desired; SIXPACK, e.g. at https://www.ebi.ac.uk/Tools/st/emboss_sixpack/, is used to convert DNA or RNA sequence to protein amino acid residue sequences (primary structure); BLASTP, e.g. at https://blast.ncbi.nlm.nih.gov/Blast.cgi, is employed to find similar sequences; Clustal Omega, e.g. at https://www.ebi.ac.uk/Tools/msa/clustalo/, is used to align many similar sequences and construct evolutionary trees of relationships; and the PROSITE database at https://prosite.expasy.org/ is used to annotate protein sequences. There were three justifications for this use of standard tools. The first is that there is little point in "reinventing the wheel": much effort by others has gone into the development and refinement of the algorithms.
The second is that the methods are indeed standards in effect, with standard default options, and researchers can expect them to behave in a familiar way and for the results to have a particular meaning. The third is that, for the present author and collaborators, this use of the World Wide Web is entirely consistent with the original intent for Q-UEL to be a Web-centered interoperability language [21][22][23][24]. The use of a Q-UEL approach to facilitate tool use means that, where appropriate, the public tools can be accessed "behind the scenes" and integrated together with the rest of the Q-UEL system in the manner of a workbench. That was particularly valuable because the integrated Biology Workbench developed at the University of Illinois at Urbana-Champaign and later implemented at the San Diego Supercomputer Center has been unavailable for some time due to funding difficulties (http://workbench.sdsc.edu/). Tools C include those algorithms which are not available as standards on the Web, or which do not produce exactly the kind of information required, or not in the required form. Those of importance in the COVID-19 project included (i) improved methods of secondary structure prediction, including prediction of surface loops as potential epitopes for vaccine and diagnostic development, that can achieve 90%-99% three-state (α-helix, β-sheet, loop) accuracy by making maximal use of large numbers of known protein three-dimensional structures without alignment, (ii) prediction of binding sites on pathogen proteins that bind to host sialic acids and hence e.g. mucins in the respiratory and alimentary tracts, and (iii) measurements of sidechain exposure along the protein sequence where the three-dimensional structure is known, to assist in design of diagnostics, vaccines, and peptidomimetics, with particular emphasis on discovering and reporting how exposure increases or decreases with antibody and receptor binding locally and at remote sites when data is available for such complexes. More recently (and not previously described) there are two further kinds of tools that are being developed to help combat COVID-19. These are discussed in Sections 4.3 and 4.4 later below. All these processes can be linked in a workflow by information exchange and control via the Q-UEL language. Protein modeling and three-dimensional simulations such as the use of molecular dynamics and molecular mechanics are a natural further step to assess conformational and binding free energies, and the means of using the Q-UEL language to drive these will be described elsewhere, but they played a relatively small role in the COVID-19 project, with the important exception of ligand binding studies for drug discovery purposes, i.e., determining which potential chemical compounds had an appropriately low free energy of binding and hence appropriate binding strength. Otherwise, the emphasis was on use of empirical experimental data when available, including data from the experimentally determined three-dimensional protein structures and protein complexes, not least because of the considerable computer power needed to calculate accurately the important entropy contributions, as discussed later below. The impression should not be given that, for a new and unexpected crisis such as an emerging new kind of epidemic, all tools will be available and well-honed through practical experience. In the above application of Q-UEL, a concept of methodological importance well-known to programmers is extreme programming (XP). See below.
This is not a well-defined formal approach, however. The relevance here is that the essence of a useful response method to combat emerging epidemics is that it is speedy, accurate, and successful, or at least plausible and logically founded, given all the knowledge that is in principle available at the time, but not all requirements can be predicted in advance. For example, the molecular biology of the pathogens of many epidemics such as AIDS and Mad Cow Disease had several novel aspects. Overall, the approach taken for COVID-19 could be described as using Q-UEL as an architectural principle, but importantly also as a means of facilitating extreme programming. Extreme programming is a software development philosophy very suited to the unexpected features of emerging epidemics because it is intended to allow rapid response to changing requirements while ensuring a reasonable degree of software quality. In practice, such an approach was necessitated in the COVID-19 project by the following considerations. Despite the present author's interest in bioinformatics and rapid response to emerging infectious diseases, neither the author nor the Q-UEL system were well-prepared with knowledge and expertise concerning coronaviruses. Indeed, Q-UEL per se had been primarily developed for use cases in clinical decision support, e.g., diagnosis, selection of best therapy, and prognosis and determination of risk, and for detection of medical claims anomalies and fraud. The diseases of particular interest had been such as congestive heart failure, renal failure, and cancers, i.e., not primarily infectious diseases nor necessarily associated with them. An extensive bioinformatics component had only begun to be introduced to facilitate the study of genetic factors. Epidemiological interest, although represented, had been primarily concerned with toxicological aspects of public health such as air quality and its impact on clinical decision support. Monitoring of infectious disease in populations had only just begun to extend that work in a natural way. Consequently, in the early days of the COVID-19 project, Q-UEL and associated tools were repurposed and adapted "on the fly" and with frequent manual intervention, because the pressing need and focus was naturally a rapid response to the rise of COVID-19, not further commercial tool development. Fortunately, the specification of Q-UEL lends itself well to such rapid application and adaptation, i.e., as extreme programming. That includes capture of expertise by progressive conversion of manual intervention to an automatic protocol, usually expressed in Q-UEL itself. Typically, this involves taking a template program in which a Q-UEL tag is represented with parts that are variables, and modifying preceding regular expressions (match-and-edit instructions) to extract text and numeric information from webpages or other incoming information, to assign values to those variables. Such short program scripts (including data capture, regular expressions, and tag template) are known as converters, because they convert a variety of source information in diverse formats to the canonical Q-UEL form. In many instances, the incoming information can already be in Q-UEL form, and often this involves combining several tags, containing knowledge from different sources, into one tag.
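A converter of the kind just described can be sketched as follows (the webpage snippet, the regular expression, and the tag template are all hypothetical; a real converter would be tuned to a specific source): a match-and-edit step extracts values from incoming text and substitutes them into a Q-UEL tag template.

# Minimal sketch of a "converter": a regular expression (a match-and-edit
# instruction) extracts values from incoming text, which are substituted
# into a Q-UEL tag template. Snippet, pattern, and template are hypothetical.
import re

snippet = "In the cohort, fever was reported in 88% of confirmed cases."

pattern = re.compile(r"(?P<symptom>\w+) was reported in (?P<pct>\d+)% of "
                     r"(?P<group>[\w ]+)")

template = ("<'{symptom}' Pfwd:={pfwd} | 'is reported in' | "
            "'{group}' Pbwd:=1 >")

m = pattern.search(snippet)
if m:
    tag = template.format(symptom=m.group("symptom"),
                          pfwd=int(m.group("pct")) / 100,
                          group=m.group("group"))
    print(tag)
# prints: <'fever' Pfwd:=0.88 | 'is reported in' | 'confirmed cases' Pbwd:=1 >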
As indicated earlier above, the important initial steps in the workflow, from which all else follows, involved the use of Q-UEL to auto-surf the Internet and text sources to extract knowledge from natural language text [25][26][27], given one or more simple initiating queries such as "SARS" or (later) "COVID-19". Several queries, like the multiple-choice answers in a medical licensing exam, together with a body of text analogous to the examination question, can be used along with tests for valid authoritative biomedical text based on lists of appropriate Latin and Greek roots and medical terms, plus a dictionary of words and phrases more characteristic of non-authoritative medical sites and non-medical sites generally, to help ensure relevance and focus [26]. Prior to that, automatic monitoring of the Internet can alert that a new disease or variant may be arising [17]. At present, there is inevitably some screening by humans of the information being obtained to identify cases that are more likely to be of genuine concern, though clearly developments in AI will help filter the wealth of information that can be generated. In most cases this involves manual or semi-automated curation of Q-UEL XTRACT tags that carry extensive annotation about the source and context and are timestamped. There are two reasons why the present author personally suspected an emerging pandemic from the Wuhan Seafood Market isolate and investigated it, with some promising features emerging from that study as described above. The first is simply a continuing interest in identifying and responding to emerging epidemics. The present author had experience for several years as an epidemiologist in the Caribbean studying emerging viruses such as Zika [17], when also a Professor of Epidemiology, Biostatistics, and Evidence Based Medicine, as well as with earlier pandemics. Earlier, he had experience leading the teams that responded first to HIV [18], followed by several HIV diagnostics patents, and to Mad Cow Disease (BSE), inventing the vaccine marketed worldwide by Abbott Laboratories, e.g. Ref. [19], as well as several animal vaccines, diagnostics, or immunotherapeutic agents. However, he was almost completely ignorant regarding coronaviruses, and the availability of Q-UEL knowledge-gathering tools was hugely beneficial. It was an approach which can also be applied to structured and semi-structured data including genomics and bioinformatics data [32,33]. While experience of any kind of problem can certainly help, a main priority should be to capture that expertise so that any researcher can use it. Moreover, it should be possible, where desirable, to make a research study recoverable and reproducible when used by the same researcher, or anyone else. It is primarily these considerations that necessitate automation, but it also means, as discussed in Section 2, that knowledge captured should include information as to circumstances and provenance. For example, the auto-surfing of the Internet for latest updates regarding COVID-19 encountered the following elements of information from Wikipedia, cited and dated as shown, which is important as Wikipedia content can be updated, and this was especially so for the COVID-19 pandemic. Some further examples of the Q-UEL approach, and of Q-UEL tags relevant here, are as follows. For example, an original extract (an XTRACT tag) obtained by auto-surfing and knowledge capture was as follows.
Fig. 1. Seven key technologies that are argued to be important for early response to emerging epidemics.
As mentioned in Introduction Section 1.2, the sometimes-stilted form when a Q-UEL tag is read directly by eye (which it need not be) is because sentences, subsentences, or integrated sentences are reparsed into as linear a sentence structure as possible, so that if required they can be easily decomposed into semantic triples, i.e. <A | relationship | B>, <B | relationship | C>, and so on, and used in the most common forms of automated inference. They are still natural-language-like, which facilitates development, debugging, and maintenance. They are also intended to contribute to the robustness of a medical system if part of the IT and communications infrastructure is lost in a disaster, since they can still be understood by eye with relatively little effort. By August 2020 it was possible to obtain more detail regarding understanding of the pandemic and its symptoms, for example, as follows. The unusual link law.moj.gov.tw/ENG/LawClass/LawAll.aspx?pcode=L0050039 in the second of these tags is still active at the time of writing this paper, and relates to a study article, "Special Act for Prevention, Relief and Revitalization Measures for Severe Pneumonia with Novel Pathogens", from the Chinese Ministry of Health and Welfare, as a study item for Chinese law students. Following general knowledge capture of the above kind, integration of knowledge with the kinds of bioinformatics studies of Section 2 becomes possible. Several other useful observations could quickly be made, including many regarding potential "in-a-pill" therapeutics, much closer to the X, Y, Z model (Section 2.1) as applied by pharmaceutical chemists. Notably, it was known that the small plant compound emodin was an inhibitor of SARS-COV-1 infection and cell entry and of the human inflammatory response enzyme 11β-hydroxysteroid dehydrogenase type 1. Though perhaps more familiar as a laxative, it has many known beneficial effects including anticancer, anti-inflammatory, antiviral, antibacterial, anti-allergic, anti-osteoporotic, anti-diabetic, immunosuppressive, neuroprotective, and hepatoprotective properties. It was thus possible for the present author to continue early computational studies on the experimentally known potent inhibitors of that enzyme, now extended to computational studies on emodin and similar compounds (e.g. Refs. [8,9]). In later exploring a broader panel of potential drugs and protein targets, Remdesivir, a broad-spectrum antiviral now in use against SARS-COV-2, was not in the published list for which the present author explored binding affinities, but the closely related natural ADP-ribose-1′′-phosphate compound and drugs with similar binding affinities and some similar features, such as cancer and lymphoma drugs, and notably Favipiravir, which at high doses has potent antiviral activity in SARS-CoV-2−infected test animals, were investigated and reported [13]. First, for the latter study, a highly conserved sequence motif in Nsp3 of SARS-CoV-2 was investigated as a therapeutic target, capturing knowledge about the functions of similar sequences in other viruses and in prokaryotic and eukaryotic organisms, in examples of automated surfing of the Internet over semi-structured data sites. By more general auto-surfing, it was quickly found that the SARS-COV-2 protein of interest contained a universal nucleotide binding domain called the macro domain.
Not all the tools found of value in the COVID-19 project can be discussed here, but in the briefest possible review the following is of interest: see Fig. 1 as a guide in the context of COVID-19 and future possible epidemics. In the opinion of the author of the present review, there are seven key technologies that will be important for early response to emerging epidemics involving new pathogens or strains. As Fig. 1 states, the Q-UEL language [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] has been used to help study them and is progressively incorporating them. Proceeding from the top clockwise, these are (i) the use of new generations of peptide biomarkers [32,33], (ii) analysis of patient genomics (including proteomics) regarding response to pathogens [28,29,32], (iii) improved de novo modeling of proteins such that large loops on polymorphic patient proteins, and not least those on the pathogen proteins and their interactions with receptors and antibodies, can be simulated (including with better entropy calculations) [34], (iv) automated reasoning in public health [36,37], (v) alternative futures analysis (discussed below, using the example of different paths in development of COVID infections), (vi) high-dimensional analytics [24,27,29,30], and (vii) management of Real-World Data including interoperability [20][21][22][23]27,31]. These are believed to meet at least many of the challenges raised in Ref. [1].
General observations
In most of the previous Q-UEL papers, the new Q-UEL tags and algorithms that the papers introduced have been considered as new results to be placed in the Results Section, but the emphasis in this paper, being a review, is to a large extent on established Q-UEL tags and algorithms and on their benefits for a fast response to an emerging epidemic, e.g., for rapid production of diagnostics and then vaccines, and ideally therapeutic drugs. It therefore seems unfortunate that in no case could it be said that a diagnostic, vaccine, or new drug against the COVID-19 virus SARS-COV-2 was developed directly by the author, even in collaboration. However, it was almost certainly possible in principle, at least on a laboratory scale and given sufficient laboratory resources, because in previous emerging epidemics diagnostics, and sometimes vaccines based on raising antibodies in test animals, were soon constructed and tested by collaborators on the basis of earlier computational methods such as those described in Refs. [17][18][19]. It is relevant to note, though, that the major difference in the case of the earlier efforts was the much greater span of time between isolation of the pathogen responsible and proposals for diagnostics and vaccines. In the case of AIDS, the virus was isolated in May 1983 but the proposals by the author and collaborators were made in January 1987, although it took some time for researchers to establish sequences for genes for at least two variants of a key surface protein, comparison of which was important to establish subsequences as vaccine targets [18]. More to the point, for COVID-19 the speed of response and publication, and in particular the high level of citations of the first two peer-reviewed publications, increasing daily, indicate that the work has been helpful to COVID-19 researchers.
Several aspects discovered by the bioinformatics and knowledge-gathering approaches described above have been followed up particularly well by other researchers, notably the subsequence KRSFIEDLLFNKV, which has been commercially available as a peptide product from several commercial organizations for research purposes, as discussed in Section 4.2 below. Although the ACE2 receptor binding region and antibody binding region were well studied in the above studies [7][8][9]14], as well as prediction of the binding of the spike glycoprotein head to host cell sialic acids [12] and the core of the highly conserved Nsp3 domain as potential pharmaceutical targets [13], probably the best-known observation from the above studies is that KRSFIEDLLFNKV is a likely Achilles heel for SARS-COV-2 [7][8][9]. This area of research is discussed as an example in Section 4.2. Some useful new tools may be considered as results, though in many cases they were modifications of algorithms in previous papers. That included bioinformatics tools and tag types that were considered in papers prior to those reporting the COVID-19 study, e.g., the report on the study of the mitochondrial genome [32] published just before the COVID-19 investigations (on January 12, 2020). Other tools can reasonably be considered as technological results of the COVID-19 project. These tools included an algorithm for predicting sialic acid non-covalent binding sites on proteins [12], and a novel algorithm for sidechain exposure and study of the effects of, e.g., viral surface proteins interacting with antibodies and host cell receptors [14], which is described in more detail in Section 4.5 below. There was use of a new protein secondary structure prediction algorithm capable of three-state prediction with 90-99% accuracy when using contemporary large databases of protein structure [45]. This was mainly split away from the main line of COVID-19 papers, so its value within the COVID-19 project, and its first application to SARS-COV-2 proteins, is emphasized here. For present purposes, the main importance of this is that loops are of particular interest for verifying putative B-epitopes, which raise an antibody response, and as regions which bind to the antibodies so raised.
Importance of invariant regions: example of the KRSFIEDLLFNKV motif
An important consequence of the project overall was the importance placed on invariant regions of the proteins of the group of viruses to which SARS-COV-2 belongs. As was pointed out from the very first paper, a highly conserved region across at least a group of the coronaviruses, and especially one at the protein surface and exposed to the environment, means at least that (a) it serves some important function or functions for members of that group, and (b) it also represents a site that is not likely to change easily in such a way that established diagnostics, vaccines, and drugs based on the original causative agent of the epidemic become useless. In other words, such regions represent an Achilles heel of the virus. Many things about SARS-COV-2 seem obvious in hindsight but were not so at the time. Once the first papers [7][8][9] had confirmed that the Wuhan Seafood Market isolate was essentially SARS, and once detailed alignments of spike glycoproteins from many coronaviruses were performed with particular attention to the KRSFIEDLLFNKV region, its likely importance quickly became clear. It was notable by virtue of being functionally important to the SARS coronavirus and less susceptible to accepted mutations.
While the ACE2 region has also now been well studied as a target by other researchers, it was soon seen to be variable amongst coronaviruses and at risk from escape mutations, a notion supported by the emergence of the omicron variant (see below). In contrast, KRSFIEDLLFNKV is a well conserved motif across all the coronaviruses, and arguably recognizable beyond them into the nidoviruses. The S2′ region including it (see below), residues 800-839, is quite well conserved in the coronaviruses in the sense that amino-acid substitutions by accepted mutations would be considered conservative substitutions, i.e., the substituted residues have similar physicochemical and conformational properties. This is especially so around the C-terminal (right hand side) arginine R constituting the S2′ cleavage point, though substitutions of the phenylalanine F by other hydrophobic residues are common. Notably, RSAIEDLLFDKV is characteristic of a common cold coronavirus, also found in the coronaviruses of dogs, cats, rodents, pigs, rabbits, camels, ferret badgers, raccoon dogs, etc. [9]. So far this has, as predicted, been conserved in SARS-COV-2 variants, and the accepted mutations in the omicron variant are not found in this region (mutations D796Y and N856K are the closest along the sequence). The subsequence KRSFIEDLLFNKV is functionally important to the virus because it includes the S2′ cleavage site at the arginine R, involved in the key stage of virus entry into the host cell. Presumably to protect itself from antibodies and untimely enzymic cleavage, the spike glycoprotein exposes its functional sites in a series of steps. KRSFIEDLLFNKV is partially exposed even in the closed state and more fully exposed after binding of the spike to ACE2, as well as after antibody binding at and near the ACE2 binding site [9]. This S2′ cleavage appears to be absolutely required for reconfiguration of the spike to attain significant levels of fusion between virus and host cell membrane following initial binding at ACE2. The S2′ site resides at some distance from the ACE2 binding region, at the stem of the "bundle-of-flowers" of the trimeric S proteins, so the details of the activation mechanism prior to SARS were not obvious. Conformation 1 of the SARS coronavirus earlier determined experimentally was three-fold symmetric and had all three receptor-binding C-terminal domain 1s (CTD1s) of the S1 subunits in "down" positions. It was primarily pre-COVID-19 studies on the cleavage process of the S protein of MERS-CoV that had shown how S1/S2 cleavage occurs first and increases the exposure of the S2′ site for enzymic cleavage, and because of spike glycoprotein homologies with SARS and the Wuhan Seafood Market isolate, a similar cell entry mechanism seemed likely, as follows. The "down CTD1" of the S1 protein locates immediately above the S2 subunit, and the opening of CTD1, especially by binding the receptor, would remove steric restraints and trigger rearrangement of the spike protein trimer with the release of the S1 subunits and extension of pre-fusion S2 helixes to form a post-fusion S2 long helix bundle. Seeking to block this crucial cell entry process for SARS-COV-2 has become a recognized strategy [38][39][40]. For example, in a subsequent paper [38] by a Chinese research group it was shown that EK1, a pan-coronavirus fusion inhibitor, targeted the HR1 domain (894-966) of the S2 protein and could inhibit infection by many human coronaviruses. The segment KRSFIEDLLFNKV has now been studied by many other researchers.
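The kind of conservation just described can be checked with a simple sketch (the coarse physicochemical grouping below is a customary one but is an illustration, not the paper's exact classification): two motif variants are compared position by position, counting identities and conservative substitutions. The leading K of KRSFIEDLLFNKV is omitted so that the two motifs align.

# Minimal sketch: compare two motif variants position by position,
# classing each change as identical, conservative (same coarse
# physicochemical group), or radical. Grouping is a coarse illustration.

GROUP = {**{r: 'hydrophobic' for r in "AVLIMFWC"},
         **{r: 'polar'       for r in "STNQYGP"},
         **{r: 'positive'    for r in "KRH"},
         **{r: 'negative'    for r in "DE"}}

def compare(motif_a, motif_b):
    for i, (a, b) in enumerate(zip(motif_a, motif_b), 1):
        if a == b:
            kind = "identical"
        elif GROUP[a] == GROUP[b]:
            kind = "conservative"    # e.g. F -> A, both hydrophobic
        else:
            kind = "radical"
        print(i, a, b, kind)

compare("RSFIEDLLFNKV",   # SARS-COV-2 motif, leading K omitted to align
        "RSAIEDLLFDKV")   # common cold coronavirus variant noted above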
Genetic Engineering and Biotechnology News of December 14, 2021 [39], referring to a research paper on Keeping SARS-CoV-2 Reinfections at Bay [40] that cites the present author, notes that "To counteract the loss of valuable financial resources and specialized professional facilities produced by escape mutations during the development of vaccine, a convergent effort toward discovery of highly immunogenic conserved sequence of viral proteins is dire need of the day. It has been reported that the amino acid sequence motif KRSFIEDLLFNKV, found in spike protein, is one of the conserved regions in coronaviridae family. The motif is partially associated with cellular entry of virus into host cell. Researchers consider this sequence as one of the most vulnerable yet conserved sequence in coronaviruses. Exploration regarding the role of this spike protein sequence is necessary to assess and confirm the degree of attachment of virus to host cell. It can be a valuable target for long-term immunity …". The peptide KRSFIEDLLFNKV has been advertised by many peptide synthesis companies, and studies on it and similar sequences as synthetic peptides have revealed interesting physicochemical properties. Notably, the conformation and aggregation of the peptide corresponding to the variant RSAIEDLLFDKV mentioned above has been studied [41].
Preliminary studies for a tool for predicting the severity of SARS-COV-2 variants
Appropriate amongst what may be considered as results of the project are some new methods that emerged from it. The following is a recent preliminary result in the author's COVID-19 project that has not previously been described and was developed while writing this paper. Normally, the natural and obvious choice of many researchers for predicting a "variant of concern" (VOC) in advance of significant epidemiological data concerning its effects, but when the original and current genomes are known, is to estimate by computer simulations the free energy of binding of ACE2 receptors and/or antibodies [46]. Earlier simulations can be extended to include the effects of the accepted mutations in variants, and so assess whether they are VOCs [47]. A VOC score is thus envisaged by many researchers as ultimately based on computed free energy differences between the bound and unbound state. The computational chemistry simulation approach is valuable and was used by the author to study the binding of potential drugs against COVID-19 [8,9,13] in cases where the binding site on the protein was relatively rigid. The problem is that calculations of this kind are not wholly reliable, not least because of difficulties in converging the entropy that can dominate determination of stable protein structures and protein-protein interactions [34]. This situation is worsened for computing a VOC score for SARS-COV-2 because the free energy of importance is the difference between unbound and bound states involving large conformationally disordered loops, mainly in the unbound reference state. Solving the proper behavior of a large conformationally flexible loop is akin to solving the protein folding problem [34]. But to worsen matters still further, the bound state is unstable in the sense that the whole spike glycoprotein is a flexible machine for cell entry. The functionally important conformational changes of the spike glycoprotein that occur on ACE2 and/or antibody binding can be seen in experimental three-dimensional structure determinations of the complexes in atomic detail [14].
For simulations, this adds a further layer of much greater complexity. More reliable entropy calculations can only be attained by a great deal of computing power, and IBM's Blue Gene supercomputer, originally motivated by the desire to solve the de novo protein folding problem, still failed to solve it despite many other useful successes in protein science [34]. Even if we consider such computations as potentially highly accurate, it seems clear that for large proteins that can exist in several conformational states, with large flexible loops, computation of the overall free energy (from enthalpy and entropy) can take a very long time even on a powerful computer. The argument is that a quick "red flag" VOC score is needed that will at least put focus on which variants need to be examined first by more sophisticated, computationally intensive methods. Building on the kind of tag shown in Section 2.2, the essential content of the Q-UEL tag attribute as displayed for alignment of new variants would be exemplified as follows. A new feature is the use of Greek characters χ, φ, Π, etc. that represent the nature of the change as the difference between the physicochemical properties of the amino acid residue at that locus in the original Wuhan sequence and those of the accepted mutation, as given along with the basis of the algorithm in Table 1. The table also contains parameters, rounded to the nearest integer, that add up to a score for the new sequence based on the kinds of output described soon below. The empirical approach was as follows. The scoring will be such that the score as a Variant of Concern is 0 for the original Wuhan strain (the concern is as to it being an even more worrying variant, not that the original Wuhan strain was not worrying). At this stage this scoring method is intended to be crude in the sense of bundling together several aspects: transmissibility, morbidity, "long COVID", ability to evade diagnostic tests, neutralizing antibodies, or drugs, ability to cause reinfection, and severity of associated symptoms in different population groups. Initially, parameters were semi-automatically assigned by reference to SARS-COV-2 and its variants of concern but also with SARS-COV-1 variants, aligning multiple sequences and noting what amino acid residue features differed in variants that appeared to have had more serious consequences. Middle East respiratory syndrome-related coronavirus (MERS-CoV), e.g., GenBank entry QGV13484.1, was also initially included because, as a SARS-like outbreak starting in 2012 in the Middle East, it has some similarities to SARS and COVID-19; but while SARS-COV-1 and SARS-COV-2 belong to betacoronavirus lineage B, MERS belongs to betacoronavirus lineage C, with substantial differences, and with a recognizable if weak sequence similarity only beginning at the segment RVQPTESIVRFPNITNLCP of Wuhan SARS-COV-2 and EAKPSGSVVEQA-EGVECD of QGV13484.1. This is outside the SARS-COV-2 RBD sequence, and so MERS was excluded from the investigation. Including SARS-COV-1 variants and comparison with the SARS-COV-2 Wuhan reference sequence and variants must, of course, also be done cautiously, but it is possible to identify analogous residues and consider them as variants of the reference Wuhan strain, to some extent.
For example, using the standard Clustal Omega tool at https://www.ebi.ac.uk/Tools/msa/clustalo/, the sections of sequence associated with the ACE2 receptor binding domain in the Wuhan reference strain GenBank MN908947.3 are aligned with Protein Data Bank entry 6NB6, which uses the SARS-COV-1 sequence in the structural determination, deposited in 2018, of the SARS-CoV complex with human neutralizing S230 antibody Fab fragment. Note that SARS-COV-1 as represented by 6NB6 has significant changes in the ACE2 and antibody binding region involving the SARS-COV-1 6NB6 segment NVPFSPDGKPCTP-PALN (see also later below), not least the deletion '-', but polar, small non-polar (hydrophobic), or large non-polar character tends to be conserved, which is in accord with the principles of conservative substitution, does allow the customary groupings of residues with similar physicochemical properties as a starting point, and does earmark locations which are highly conserved across the SARS-COV-1 and SARS-COV-2 betacoronavirus lineage B group.
Table 1. Preliminary example parameterization to predict variants of concern (changes from the Wuhan reference).
Amongst the changes that are more remarkable is that the asparagine N at the start of the above segment has a small polar sidechain, while a glutamate E carrying a negatively charged sidechain occurs at the same locus in SARS-COV-2. This might be a feature that gives rise to the different medical consequences of SARS-COV-2 compared with SARS-COV-1, perhaps the greater transmissibility of the former. Importantly, also considered were the relationships between the different changes and the changes in conformation of regions when antibody or ACE2 is bound, as seen in three-dimensional structure determinations deposited in the Protein Data Bank www.rcsb.org, and analyzed as to degrees of sidechain exposure and chain conformational flexibility [14]. For example, the following is a condensed form of one of the blocks of the RBD (described more extensively and in more detail later below). Blocks were chosen, as in the original paper [14], to conveniently capture, and appropriately partition, important features, e.g., so that regions around, as well as in, the receptor binding domain that potentially affect antibody binding are included in the blocks. The scoring is based on a classification that includes the conformational effects of antibody and ACE2 binding, but is based on the classes of physicochemical change that became apparent in the sequence comparisons in the most preliminary studies. The two kinds of change are combined in the following ways. Above, rows (1)-(3) show the Wuhan reference sequence, the omicron variant, and a SARS-COV-1 subsequence in the main part of the receptor binding domain (RBD). Row (4) shows ACE2 receptor binding contacts, and row (5) shows antibody binding contacts. Rows (6)-(8) use numbers 0-9 and X (for 10) to show the extent of burial away from solvent in the indicated experimental structures, with ~ indicating a conformationally disordered loop in the indicated three-dimensional structures. Row (9) indicates changes in the physicochemical properties of amino acid residues in SARS-COV-2 variants of concern, using Π to indicate a change from a highly polar (charged) sidechain to a hydrophobic one, χ to indicate a change but to a sidechain of similar properties, þ to indicate a positive charge from neutral, − to indicate a negative charge from neutral, ± a change of charge, and φ a hydrophobic from polar neutral or glycine (G).
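Given the symbol classes just listed, the scoring mechanism can be sketched as follows (the lookup entries reproduce only the example values quoted in the walk-through below; the names, the context categories, and the default are assumptions for illustration rather than the full Table 1):

# Minimal sketch of the "red flag" VOC scoring mechanism: each accepted
# mutation is classified by physicochemical change and loop context, and
# an integer score is looked up and summed. Entries reproduce only the
# example values quoted in the text; unlisted cases default to 1.

SCORES = {
    # (change class, context) -> score; context is one of:
    # 'none', 'antibody', 'ace2', 'ace2_disordered', 'both'
    ('any',  'none'):            1,   # outside both kinds of binding loop
    ('phi',  'antibody'):        9,   # polar/G -> hydrophobic in @ loop
    ('pi',   'antibody'):        3,   # charged -> polar neutral in @ loop
    ('plus', 'ace2'):            8,   # neutral -> positive in ordered # loop
    ('chi',  'antibody'):        2,   # conservative substitution
    ('any',  'both'):           10,   # in both ACE2 and antibody loops
    ('plus', 'ace2_disordered'): 7,
    ('phi',  'ace2_disordered'): 4,
}

def voc_score(mutations):
    total = 0
    for change, context in mutations:
        total += (SCORES.get((change, context))
                  or SCORES.get(('any', context))
                  or 1)
    return total

# A short hypothetical list of classified mutations, not a real variant:
example = [('chi', 'none'), ('phi', 'antibody'), ('plus', 'both')]
print(voc_score(example))  # 1 + 9 + 10 = 20

The walk-through in the following paragraphs applies scores of this kind mutation by mutation to the omicron strain.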
These and the rest shown in Table 1 emerged as the significant changes from this study, and it is important to keep in mind that these do not relate to the probability of accepted mutations over many proteins, and hence evolution's notion of physicochemical similarity [48], but to the consequences of selective pressure on SARS-COV-2 in favor of continued replication of the virus. Although the scoring parameters in Table 1 are empirical, they appear to fit some rationales. They do not reflect the general trend in protein evolution to conserve residues with similar sidechain properties [48]. They reflect, in contrast, selective pressure to reject binding by antibodies formed against previous variants (encountered by infection or vaccination) while at the same time seeking to strengthen ACE2 binding, but presumably only to a limited degree, since the coronaviruses have had substantial past opportunities to refine ACE2 binding. An accepted mutation that involves a radical change in physical properties and occurs in a loop binding antibody or ACE2 is likely to have more serious implications, since it suggests a strong selective pressure, irrespective of whether the loop is disordered prior to binding, noting that the conformation of the loop on binding is not in general similar in the antibody and ACE2 binding cases. If the mutation occurs just in an antibody binding loop or in a separate ACE2 binding loop, and the loop is conformationally disordered prior to binding, it appears that a radical mutation is more likely to be accommodated by significant conformational adjustments of the binding loop, so that the specific nature of the change in physicochemical properties of the sidechain is somewhat less important. A fuller analysis of the whole receptor binding domain and neighboring sequence including the other strains is as follows. The scoring proceeds for the example of omicron as follows, noting that one reason that the method is preliminary is that what constitutes an ACE2 or antibody binding loop and a given degree of sidechain exposure depend on the author's method of classification of these features. Nonetheless, it is precisely defined with respect to the classifications in the above blocks, and the rationale and methods are given in Ref. [14]. Another reason for the preliminary nature is that it is obvious that any scheme is likely to score omicron highly relative to the Wuhan strain simply because of the large number of accepted mutations; but while at the time of writing it is suspected that omicron should rate lower because of a decrease in severity of symptoms, it is early days for assessing that for all groups, such as diabetics, and in particular including any "long COVID" consequences. Nonetheless, this is likely to apply in the early days of any new variant arising for any pathogen, perhaps especially a virus. Working through the sequence, the first two mutations in the omicron strain are neither in ACE2 binding loops # nor in antibody binding loops @, and so score 1 each by Table 1. The possibility of some effect on binding ACE2 and/or antibodies cannot be disregarded. The next three mutations, in antibody binding loops @, are designated φ, which means that they are changes from polar neutral or glycine residues to residues with hydrophobic (non-polar) sidechains, and score 9 each by the Table. A mutation designated π signifies a charged residue replaced by a polar neutral residue; it occurs in an antibody binding loop of the spike protein, and so scores 3.
A mutation designated þ, a neutral residue changing to a positively charged residue in an ACE2 loop (but not a conformationally disordered loop), scores 8. A conservative mutation indicated by χ scores 2. The next four mutations χ, þ, ±, and þ all occur both in ACE2 and antibody binding loops of the spike protein and score highly at 10 each, even though in conformationally flexible loops. The next three mutations þ, φ, and þ are in ACE2 binding loops that are disordered in the absence of binding, and score 7, 4, and 7, respectively. The last mutation is neither in an antibody nor an ACE2 receptor binding loop and so scores 1. Note that this last, 16th, mutation lies outside the receptor binding domain and so is not one of the 15 mutations normally considered for omicron. However, it is included as defined by the author's block convention and could be considered as having a potential effect. The total score for omicron is 102 ignoring this last residue and 103 including it, relative to the original Wuhan strain, which scores 0 by definition. Modeling the clinical effects of SARS-COV-2 variants Also appropriate amongst what may be considered the results of the author's COVID-19 project is a consideration of methods for predicting the effects of variants of concern on the course of disease, as seen both from the perspective of individual patients and in terms of impact on the healthcare system as a whole. The regions that vary significantly in the evolution of SARS-COV-2 are of interest to public health for three main reasons: (i) possible escape from vaccines and appropriate antiviral drugs, (ii) the effect that the variation has epidemiologically on the way the virus spreads, and (iii) the effect that the variation has clinically, regarding the development of the disease in infected persons. Such considerations require a further layer of modeling. To that end, the other result of the study associated with the present paper and not previously described was the use of the Q-UEL and Hyperbolic Dirac Net (HDN) approach [30,36,[42][43][44]] to study the stages of the epidemic for a new variant, and the course of the disease in patients. Such graphs can be used in the manner of an epidemiologist's chain rule, i.e., the mortality rate for a population given prevalence and probability of exposure, probability of infection given exposure, and so on, including conditional probabilities of symptoms, complications, and death (a sketch follows below). See Fig. 2, which reflects the probabilities, and hence the Q-UEL tags, that are required as described in Ref. [30], to make maximum use of information by minimizing independence assumptions, and to avoid counting the same information more than once. Such an HDN of the simplest type is essentially a Bayes Net making use of Q-UEL algebra to construct a probabilistic knowledge graph that is a Bidirectional General Graph (BGG), i.e., bidirectional, and optionally including cyclic paths, that can be solved without iteration [30]. A Bayes Net is, in contrast, a more restricted Directed Acyclic Graph (DAG) by definition. Such approaches, including Bayes Nets, imply use of logical AND between probabilities and the assumption of certain independencies, although in this study the method was extended to include logical OR to enable "Alternative Futures Analysis", see Fig. 2, which can be applied both to "what if" studies in responses to epidemics and to the different possible outcomes for a patient exposed to and then infected by COVID-19.
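As a minimal sketch of the epidemiologist's chain rule mentioned above, the following multiplies conditional probabilities along a path (logical AND) and sums over mutually exclusive alternative paths (logical OR); all probability values are placeholders, not the study's calibrated values.

```python
# Minimal sketch of the epidemiologist's chain rule used in an HDN of the
# simplest (Bayes Net-like) type.  AND along a path is a product of
# conditional probabilities; OR over mutually exclusive alternative futures
# is a sum over paths.  All numbers here are placeholders.
P_exposure = 0.03               # P(exposed)
P_infect_given_exposure = 0.30  # P(infected | exposed)
P_severe_given_infect = 0.15    # P(severe symptoms | infected)
P_death_given_severe = 0.05     # P(death | severe symptoms)

# One path from origin to a terminal node (logical AND):
p_path_death = (P_exposure * P_infect_given_exposure
                * P_severe_given_infect * P_death_given_severe)

# Alternative futures (logical OR over mutually exclusive outcomes):
P_mild_given_infect = 1.0 - P_severe_given_infect
p_infected = P_exposure * P_infect_given_exposure
p_all_infected_paths = (p_infected * P_severe_given_infect
                        + p_infected * P_mild_given_infect)
assert abs(p_all_infected_paths - p_infected) < 1e-12  # paths sum correctly

print(f"P(death via this path) = {p_path_death:.6f}")
```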
This kind of inference net approach does depend on having epidemiological data for a new variant, or for emerging diseases other than COVID-19, because such data are required to parameterize the required probabilities, although the above scoring measures for VOCs can be used to predict probabilities that can then be tested in "what if" computer experiments. Note that decision trees, clinical pathways, and most epidemiological graphs usually indicate a flow from left to right, and so are flipped around compared with Bayes Nets and HDNs, which follow the conditional probability notation P(A|B) = P("A←B") [30]. The convention suitable for the former is used in Fig. 2. Fortunately, the use of dual probabilities [30] makes this change in convention simple, though the one used should always be clearly stated. Mathematically, it is a matter of a sign convention: the choice here is equivalent to saying that we follow Eqn. (1) but take the complex conjugate * of the bra-ket, <A|B> = <B|A>*, i.e., change the sign of the imaginary part, but omit all the *, taking them as understood, for brevity. Such HDNs were initially calibrated for COVID-19 B.1.1.7 and related early variants and then adjusted for omicron as discussed in the previous Section 4.3, though this did not include the new variants related to omicron that arose while writing this paper, and for which details as to clinical effect were in some cases relatively sparse. For example, in response to the queries, the following tag was one of many tags for a variety of European countries that had at some stage 50% or more of B.1.1.7 amongst tested people, and also had association constants of 3 or more with the conditions considered. (Fig. 2. A basic hyperbolic Dirac net for alternative futures analysis in the case of a patient exposed to a disease such as COVID-19. Probabilities used in this study are purely examples at this stage and represent an amalgam of information from various sources.) Since shortly before submission, BA.2 has extensively replaced BA.1 omicron, such that the probabilities appropriate to BA.1 are likely to be obsolete and misleading at the time of reading this, and the probability values discussed later below relate to data for the second peak of September 24, 2020 to March 28, 2021, for which more data are available. One reason for caution is that the Q-UEL knowledge-gathering techniques were valuable but also illustrated that it is dangerous to extract statistics of a previous wave to predict the statistics at the very start of a wave due to a new variant. Recall that COVID-19 alpha was becoming dominant around the beginning of January 2021, delta around May 2021, and omicron in early January 2022. Typically, pathogens are assumed to become milder in time due to natural selective pressure to survive better, and due to the deaths of the more susceptible hosts over several generations. Also, growing experience in dealing with the disease shifts the spectrum of severity in the direction of less severe outcomes. Omicron patients had a 53% reduced risk of hospitalization and a 91% reduced risk of death compared with patients who had the delta variant (though of course the population mortality rate will increase if the incidence and prevalence of a wave is much higher than in a previous wave). It was tempting to consider the early statistics for in-hospital fatalities from severe symptoms of COVID-19 in the first wave of early 2020 as indicative of "serious covid" in the second, primarily alpha, wave.
So, for example, knowledge capture indicated that between March 1 and May 11, 2020, the probability of dying of serious (in-hospital) COVID-19 if aged 50-59 was about 0.055 (but 0.135 if the patient had type 1 diabetes) in the first wave, which became the probability of serious symptoms for the second wave, and 0.25 for patients aged 70-79 (but 0.27 if the patient had type 1 diabetes), which similarly became the probability of serious COVID-19 in the following wave. The following is from semi-structured data in tabular form captured from a web page in the form of an XTRACT tag but simplified to a CTRACT tag, implying a degree of automated curation. However, in the early stages of the alpha B.1.1.7 variant, the collected knowledge suggested that it significantly increased the risk of hospitalization and the fatality rate for patients aged 70 or less, but decreased the fatality rate for older patients. Consequently, for variants arising at this time and in the future, Fig. 2 is to be considered as a template, irrespective of the specific probabilities, which should be introduced as required. Some methods and examples for the use of this template are as follows. The principles of coherence (mutual consistency of probabilities) that should be considered are discussed in Ref. [30]. Should this template need to be varied, Ref. [30] also gives a step-by-step account of manual construction of small inference nets, though semi-automated [24] and automated [29] methods are usually used, except when Q-UEL is used in a programming-language mode [36], that is to say, in an Expert System approach in which the human expert user enters the probabilities [36]; but all still first require datamining of epidemiological sources to generate the tags with the required probabilities (and association constants). In Fig. 2, the state '?' is a state of observation or preparation of a probability, such that P(?) = 1, and a tag like <A|?> is analogous to a self or prior probability in a Bayes Net. Coherence means establishing other reasonable values under the constraint that the probabilities used must be such that Bayes' Rule and normalization and marginal summation are satisfied, and that the sum of all paths from origin to terminal nodes to the right is probability 1.0 (a sketch of these checks follows below). By such means it is possible to fill many gaps in probability assignments and fully quantify such a graph, but it varies from country to country, variant to variant, and not least from patient type to patient type according to conditioning factors such as ethnic and socioeconomic group, and also not least by genomic "molecular ethnicity", in ways that are as yet to be fully understood. Useful sources of data hit upon by the knowledge gathering methods include the European Surveillance System (TESSy) and the GISAID database of the WHO Global Influenza Surveillance and Response System (GISRS). The available data are in this case seen for the most part as clean, by being available in several well-defined tabular formats and by having no unknowns or ambiguities. For example, in the first calibration of the template, knowledge captured by the Q-UEL system indicated that among US counties with populations greater than 500,000 people, during the week ending June 13, 2020, the median estimate of the county-level probability of a confirmed infection was 1 infection in 40,500 person contacts. Using the knowledge gathering techniques, it was estimated that each person interacts with about 50 people a day through face-to-face contact plus, e.g., supermarket exposure.
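Pausing the worked example for a moment, the following is a minimal sketch of the coherence checks described above (Bayes' Rule consistency, normalization, and unit path-sum); the graph and all numbers are placeholders.

```python
# Minimal sketch of the coherence constraints described above.  The graph
# and all probabilities are placeholders used only to show the checks:
# Bayes' Rule consistency, normalization, and that all root-to-terminal
# paths sum to probability 1.0.
TOL = 1e-9

def check_bayes(p_a_given_b, p_b_given_a, p_a, p_b):
    # P(A|B) P(B) must equal P(B|A) P(A): both equal the joint P(A,B).
    return abs(p_a_given_b * p_b - p_b_given_a * p_a) < TOL

def check_paths(branching):
    """branching: dict node -> {child: P(child|node)}; leaves are absent."""
    def mass(node):
        kids = branching.get(node)
        if not kids:
            return 1.0
        assert abs(sum(kids.values()) - 1.0) < TOL, f"{node} not normalized"
        return sum(p * mass(child) for child, p in kids.items())
    return abs(mass("?") - 1.0) < TOL  # '?' is the preparation state, P(?) = 1

graph = {
    "?":        {"exposed": 0.03, "not exposed": 0.97},
    "exposed":  {"infected": 0.30, "not infected": 0.70},
    "infected": {"mild": 0.80, "severe": 0.15, "critical": 0.05},
}
print(check_bayes(0.06, 0.02, 0.3, 0.1), check_paths(graph))  # True True
```

The worked estimate of infection probability continues below.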
Assuming 2 weeks to show infection, the probability is of the order of (1/40,500) × 50 × 14 ≈ 0.017. For COVID-19, data to that date suggested that 80% of infections are mild or asymptomatic, 15% are severe infections requiring oxygen, and 5% are critical infections requiring ventilation. It was also found that a probability of 0.021 was reported for serious, hospitalized, then discharged cases out of the population. A probability of 0.00071 was reported as the prevalence of complications, 0.00048 was reported as the cause-specific mortality rate, and 0.062 was reported as the case-specific fatality rate. Conclusions The above paper sought to illustrate ways in which computers and the Internet can help combat emerging disease, and described as "Results" some preliminary methods indicating directions in which computational tools might be further developed to meet that challenge. Such studies are still incomplete, and efforts by many workers will doubtless continue to be developed for several years, improved and fine-honed by experience of their use in meeting hitherto unnoticed species or variants of pathogens. The approach by the present author is ultimately rooted in some mathematics that is not widely known, but there is as yet nothing to say that it is not a valid candidate and insightful example for providing the general kind of tools required. As stated in Theory Section 2.1, the focus is on what such tools need to do. The approach described above is to be seen only as an example of a way to achieve that. In developing a design approach based on bioinformatics, the appropriateness for practical development of diagnostics, vaccines, and peptidomimetics is constantly to be kept in mind. In this paper, there has been a large degree of emphasis on techniques most relevant to making use of peptide synthetic chemistry and laboratory immunology. This is simply because the focus of the initial papers was to some extent with peptide-based vaccines in mind, or, somewhat similarly, epitopes inserted as loops into cloned proteins. It was the peptide approach that seemed the most modern, and that had been successful in other cases in the hands of the author and collaborators, as well as in the laboratories of many other workers and in veterinary medicine. However, these peptide-centric considerations are by no means outmoded, even in the light of RNA and DNA vaccines. Such peptide-based tools still have the benefit of focusing on and using only the parts that matter for the effect desired, and run less risk of side-effects due to the presence of unnecessary material, such as hemagglutination or autoimmune responses in some patients. There is a considerable body of emerging literature on the risks of RNA and DNA vaccines, which will be analyzed elsewhere: much of it seems alarmist, and some reports have been retracted. In view of the obvious lifesaving success of RNA and DNA vaccines, it might seem ungracious to consider these concerns. However, one cannot easily put aside that, at a scale much larger than peptide methods and commensurate with killed or attenuated viruses as vaccines, the RNA and DNA vaccines are still big constructs that act on complex systems inside human cells, and contain features that may not always be fully understood. There is even room for a more fundamental development of the peptide-based approach. One that could bring the response of bionanotechnology to the development of novel protein-like compounds interacting with target proteins [49] is shown in Fig.
3, reproduced from the sister journal [14], and using the technology described by the present author and colleagues in Refs. [50][51][52][53][54]. In essence, this reflect-complement-reflect method requires synthesizing a viral or host protein or protein-domain target using D-amino acids (the first reflect step), attaching that to an L-amino acid protein carrier to raise antibodies (the complement, or wet-lab fit-to-binding-site, step), and synthesizing nanobodies (here meaning antibody heads) out of D-amino acids (the final reflect step) to interact with the original protein target. Peptides and proteins made entirely from D-amino acid residues fold up in space and function in mirror image to their L-amino acid counterparts, but the resulting complex structure cannot be considered biological, and is not subject to rapid proteolysis in the patient (though it is ultimately degraded intracellularly). This paper also touched upon, and illustrated, the more general developments by the author and collaborators in the areas of knowledge management and AI that are in progress, by describing their application in the COVID-19 pandemic. In closing summary, the major finding has been, not unexpectedly, that access to the fullest possible knowledge of emerging and previous epidemics, gathered from different places and different times, is extremely useful in rapid defense against emerging disease. It is also important to update this dynamically, as real-world data in real time. However, integration of knowledge from diverse sources has not been a strong feature of responses to emerging epidemics in the past, and efforts like those described here are required. Some would argue that the world's response could have been faster in the case of COVID-19 [6], and if so, we should learn from it. Coronaviruses may not be the pathogens involved in the next pandemic. Concerns arise constantly. For example, while preparing an early draft version of the present paper, a warning sign for avian influenza in Barkby, Leicestershire, UK, was photographed on Sunday, December 12, 2021, e.g., Ref. [55], apparently the result of transmission from chickens to a single farmer, which could involve a new strain. The patient was isolated, the WHO informed, and doubtless the viral RNA has already been sequenced. By late February 2022 there had been no further news, but had this event emerged as the seed of a new epidemic, an official, clear, well-advertised reference to finding the sequence quickly on GenBank and gathering all relevant knowledge would have been the important first step to realize and facilitate the kind of approaches described here. At the time of submitting this final version, there is also the emergence of what appears to be a new form of viral hepatitis, reversing the COVID-19 story by focusing on children as the most affected part of the population. And then yet again, at the time of doing the galley proofs of this paper, there is the rise of "monkey pox" in the human population. There seems to be a relentless progression of potential new pandemics that makes the previous decades almost seem like a lull. But throughout, the argument remains the same: gathering and bringing together new knowledge from all sources to tackle new emergent diseases remains imperative.
Declaration of competing interest The author declares the following financial interests/personal relationships which may be considered as potential competing interests: Barry Robson reports a relationship with Ingine Inc. that includes: equity or stocks. The author is an Assistant Editor of this journal - Barry Robson.
2022-05-20T13:12:31.784Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "e58397f400ac9f563105ed08168b11f4b6dad85d", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.imu.2022.100966", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "91804c49af41a651558ed52092d889b0fbfafac6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
2369876
pes2o/s2orc
v3-fos-license
Nuclear Localization of CXCR4 Determines Prognosis for Colorectal Cancer Patients Chemokines and their receptors are implicated in the formation of colorectal cancer metastases. CXCR4 in particular is an important factor determining migration, invasiveness, metastasis and proliferation of colorectal cancer cells. The objective of this study was to determine the expression of CXCR4 in tumor tissue of colorectal cancer patients and to associate CXCR4 expression levels with clinicopathological parameters. Levels of CXCR4 expression in a random cohort of patients, who underwent primary curative resection of a colorectal carcinoma, were retrospectively determined by quantitative real-time RT-PCR and semi-quantitative analyses of immunohistochemically stained paraffin sections. Expression levels were associated with clinicopathological parameters. Using RT-PCR we found that a high expression of CXCR4 in the primary tumor was an independent prognostic factor for a poor disease free survival (p = 0.03, HR: 2.0, CI = 1.1-3.7). Immunohistochemical staining showed that nuclear distribution of CXCR4 in the tumor cells was inversely associated with disease free and overall survival (p = 0.04, HR: 2.6, CI = 1.0-6.2), while expression in the cytoplasm was not associated with prognosis. In conclusion, our study showed that a high expression of nuclear localized CXCR4 in tumor cells is an independent predictor of poor survival for colorectal cancer patients. Introduction More than 100 years after the postulation of the "seed and soil" theory, the precise mechanisms determining the directional migration and invasion of disseminated cancer cells into specific organs remain to be established [1,2]. Recent studies increasingly show that chemokines and their receptors are an important factor in this process of organ-selective metastasis [3]. Chemokines are small signaling cytokines that act as chemoattractants through interaction with G-protein-coupled, seven-transmembrane-domain receptors [4,5]. They are the major regulators of cell trafficking and adhesion. Specific chemokines are produced and released by target organs and attract tumor cells bearing the corresponding receptors, resulting in site/organ-specific cancer cell migration and formation of metastases. This migration signaling mechanism is supported by studies in cancer models demonstrating that malignant cells can target specific organs or tissues by selected chemokine receptor-ligand interaction [6][7][8][9][10]. Accordingly, neutralization of the CXCL12-CXCR4 interaction leads to a marked inhibition of metastasis in tumor animal models [6,11,12]. Muller et al. were the first to implicate a key role for CXCR4-CXCL12 in the organ-specific metastasis of breast cancer [6]. Thereafter, numerous authors have reported on the involvement of CXCR4-CXCL12 in promoting the metastatic homing of different types of tumor cells, including colorectal cancer [10,[13][14][15][16]. CXCR4 is expressed in intestinal cells and over-expressed in colorectal tumor cells [16][17][18]. It is activated upon binding with its ligand CXCL12, also known as stromal cell-derived factor (SDF-1), triggering cell adhesion, directional migration and proliferation of tumor cells [6]. CXCL12 is normally produced by stromal cells of lymph nodes, lung, liver and bone marrow. These are the most frequent sites for colorectal cancer metastases [19]. At the moment only the TNM classification is used to stage patients with colorectal cancer.
New prognostic biomarkers are required to improve the staging of colorectal cancer patients, thereby resulting in better selection of patients that might benefit from (adjuvant) therapy. Many studies have demonstrated an important association between CXCR4 expression and clinical prognosis of patients with various types of cancer [3,13,14,[20][21][22][23]. In our study, we retrospectively determined the level of expression and cellular distribution of CXCR4 in association with clinical, pathological and prognostic parameters in tumor tissue of a randomly selected cohort of colorectal cancer patients, using RT-PCR and immunohistochemical techniques. This study focuses on whether CXCR4 might function as a biomarker to improve the current staging of colorectal cancer patients. Material and Methods Tumor Specimens for RT-PCR RNA from snap-frozen tumor samples, containing at least 60% tumor cells as determined by a pathologist (HM), of 70 curatively operated colorectal cancer patients was isolated using RNeasy columns (Qiagen, USA). All patient materials were obtained with approval of the local medical ethics committee. Patients were operated on between 1990 and 2001; at the time of censoring, 41 (59%) had died, of whom 22 (54%) died from their disease, and 29 patients were still alive; four of them were alive with recurrence of the tumor. Mean follow-up was 99 months (range 50-172 months). Patients with stage I/II (n=47) and stage III (n=23) colorectal cancer (as defined by the American Joint Committee on Cancer and Union Internationale Contre le Cancer criteria) were selected for this study. RT-PCR of CXCR4 in a Patient Cohort PCR primers for the detection of CXCR4 and the housekeeping genes (heterogeneous nuclear ribonucleoprotein M (HNRPM) and TATA box binding protein (TBP)) were designed in PRIMER Express (Applied Biosystems, USA) and span at least one exon-exon boundary. The primers used were: HNRPM, 5'-GAGGCCATGCTCCTGGG-3', 5'-TTTAGCATCTTCCATGTGAAATCG-3'; TBP, 5'-CACGAACCACGGCACTGAT-3', 5'-TTTTCTTGCTGCCAGTCTGGAC-3'; and CXCR4, 5'-TTCTACCCCAATGACTTGTG-3', 5'-ATGTAGTAAGGCAGCCAACA-3'. RT-PCR reactions were performed on an ABI Prism 7900ht (Applied Biosystems) using the SybrGreen RT-PCR core kit (Eurogentec, Belgium). Cycle conditions were 10 min at 95°C followed by 40 cycles of 10 s at 95°C and 1 min at 60°C. Cycle threshold extraction was performed using the SDS software (version 2.2.2, Applied Biosystems). For all PCR reactions, a standard curve was generated using a five-step, five-fold dilution of pooled cDNA from the HCT81 colorectal cancer cell line. Relative concentrations of mRNA for each gene were calculated from the standard curve. After RT-PCR, dissociation curves were made to check the quality of the reaction. Reactions with more than one peak in the dissociation curve were discarded. For normalization, the expression values for each gene were divided by the normalization factor (the average of the two housekeeping genes). Immunohistochemistry of CXCR4 in a Patient Cohort A tissue microarray (TMA) was constructed from formalin-fixed, paraffin-embedded tissues from 58 curatively operated colorectal cancer patients as described previously [24]. Standard three-step, indirect immunohistochemistry was performed on 4-μm tissue sections transferred to glass slides using a tape-transfer system (Instrumedics, USA), including citrate antigen retrieval and blockage of endogenous peroxidase.
Sections were incubated overnight with the primary antibody against CXCR4 (mouse anti-human CXCR4 IgG2B, clone MAB172, R&D Systems, USA). The secondary reagents used were biotinylated rabbit anti-mouse IgG antibodies (DAKO Cytomation, Denmark) and biotinylated-peroxidase streptavidin complex (SABC; DAKO Cytomation, Denmark). Microscopic analysis was performed by two independent observers (F.M.S. and C.J.K.) in a double-blinded manner. Three different punches per patient were scored. Cytoplasmic and nuclear intensity of CXCR4 staining were separately scored in two categories: low (0) or strong (1). Since the envelope of all nuclei of all tumors was stained, nuclear intensity was determined based on the degree of staining of the nucleoplasm. Where discrepancies arose between the scores from the same tumor, an average of the scores was taken, with confirmation by two observers using a double-headed microscope and a consensus decision taken in all cases. Tissue stromal cells, normal epithelium or lymph follicles served as positive internal controls to ascertain the quality of the staining. To distinguish microsatellite instable (MSI) from microsatellite stable (MSS) tumors, the TMA was stained for the mismatch repair proteins MLH1 and PMS2, as previously described [25]. MLH1 and PMS2 are deficient in sporadic MSI tumors. Therefore, the expression of these proteins was used to differentiate MSI and MSS rectal cancers. Tissue stromal cells, normal epithelium or lymph follicles served as positive internal controls when analyzing MLH1 and PMS2 expression. The expression of MLH1 and PMS2 was scored positive if tumor cells showed expression, and negative if tumor cells showed no expression of either MLH1 or PMS2, provided that tissue stromal cells did show expression, indicating MSS and MSI tumors, respectively [26]. Statistical Analysis All analyses were performed with SPSS statistical software (version 12.0 for Windows, SPSS Inc, Chicago, USA). The Mann-Whitney U test (M-W) was used to compare variables. Kaplan-Meier analyses were performed to analyze patient survival. The entry date for the survival analyses was the time of surgery of the primary tumor. Events for disease free survival and overall survival were defined as follows: from time of surgery to time of disease relapse or death by any cause (for disease free survival), and time of death by any cause (for overall survival), respectively. Variables were first separately analyzed in univariate analysis; variables with a P value of <0.10 in the univariate analyses were then subjected to a multivariate analysis. Cox regression analyses were used to calculate hazard ratios (HR) with 95% confidence intervals (CI). Low Levels of CXCR4 RNA Expression Predict Good Prognosis The RNA level of CXCR4 was determined in primary tumor tissue of a cohort of 70 colorectal cancer patients using quantitative RT-PCR and linked to clinical follow-up data. The impact of high versus low expression of CXCR4 was assessed using the 50th percentile cut-off point as previously defined [10,14] (a minimal code sketch of this style of analysis is given below). The characteristics of the cohort of colorectal cancer patients included in this study are summarized in Table 1. To evaluate whether CXCR4 and clinicopathological features were associated, the level of CXCR4 was correlated with each feature. CXCR4 expression was not associated with any of the clinicopathological variables (Table 1). Univariate Cox regression analyses were performed to identify prognostic factors for disease free survival and overall survival (Table 1).
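As an aside, the following sketch illustrates this style of analysis, dichotomizing expression at the 50th percentile and fitting Kaplan-Meier and Cox models; it uses the Python lifelines package rather than SPSS, and the file and column names are assumptions invented for this sketch.

```python
# Illustrative sketch of the survival analysis described above, using the
# Python lifelines package in place of SPSS.  The CSV file and column
# names are assumptions for this sketch.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical: one row per patient

# Dichotomize CXCR4 expression at the 50th percentile cut-off.
df["cxcr4_high"] = (df["cxcr4_expr"] > df["cxcr4_expr"].median()).astype(int)

# Kaplan-Meier curves for disease free survival, high vs. low expression.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("cxcr4_high"):
    kmf.fit(grp["dfs_months"], event_observed=grp["dfs_event"],
            label=f"CXCR4 {'high' if label else 'low'}")
    kmf.plot_survival_function()

# Multivariate Cox regression: hazard ratios with 95% CIs.
cph = CoxPHFitter()
cph.fit(df[["dfs_months", "dfs_event", "cxcr4_high", "tnm_stage", "age"]],
        duration_col="dfs_months", event_col="dfs_event")
cph.print_summary()  # reports HR (exp(coef)) and confidence intervals
```

The results of the actual analyses follow.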
Advanced patient age (p=0.006; p=0.005), TNM stage (p<0.001; p<0.001), and high CXCR4 expression (p=0.006; p=0.01) proved to be significant predictors of poor disease free and overall survival, respectively, in univariate analyses (Table 1). The Kaplan-Meier curve for disease free survival plotting high versus low expression of CXCR4 is shown in Fig. 1. High expression of CXCR4 retained its strength as an independent predictor of decreased disease free survival (HR: 2.0, p=0.03; Table 1). TNM stage (HR: 2.9, p=0.001; HR: 3.1, p=0.001) also retained its strength as an independent predictor for disease free and overall survival, while patient age (HR: 2.0, p<0.05) was found to be an independent predictor only for overall survival. Our RT-PCR results showed that high expression of CXCR4 is independently associated with poor disease free survival for colorectal cancer patients. Nuclear Localization of CXCR4 Determines Prognosis for Colorectal Cancer Patients Using immunohistochemistry, a TMA of 58 colorectal tumors was stained for CXCR4. We observed immunoreactivity for CXCR4 in the cytoplasm, cell membrane and nucleus of normal and tumor intestinal epithelial cells (Fig. 2). For prognostic purposes only CXCR4 expression in the cancer epithelium was scored. Cytoplasmic staining and nuclear staining were semi-quantitatively analyzed according to previous publications [20]. For cytoplasmic CXCR4 staining, 22 (38%) tumors were classified as weak and 36 (62%) as strong. For nuclear CXCR4 staining, 15 tumors were classified as low (26%) and 43 as strong (74%). No correlation was found between nuclear and cytoplasmic expression of CXCR4. Also, no correlation was found between the level of CXCR4 mRNA and either nuclear or cytoplasmic expression of CXCR4 as determined by immunohistochemical techniques. Association of cytoplasmic CXCR4 expression with clinicopathological and survival parameters did not reveal any significant correlation. In contrast to cytoplasmic localized CXCR4, nuclear localized CXCR4 was found to be a significant predictor of survival. Using univariate Cox regression analyses, we showed that strong nuclear expression of CXCR4 was significantly (p=0.03) associated with decreased overall survival compared to weak nuclear expression of CXCR4. Patient characteristics and several markers that have an effect on disease free survival and overall survival in colorectal cancer showed no significant association with the level of CXCR4 (Table 2). In addition, patient age (p=0.008, p=0.006) and TNM stage (p=0.002, p=0.002) were found to be significant predictors for disease free survival and overall survival, respectively (Table 2). Using Cox multivariate analysis, strong expression of CXCR4 (HR: 2.6, p=0.04; HR: 3.7, p=0.02) retained its strength as an independent predictor for both poor disease free survival and overall survival, together with TNM stage (HR: 2.9, p=0.003; HR: 3.3, p=0.002) and median age (HR: 2.5, p=0.01; HR: 2.8, p=0.008; Table 2). Semi-quantitative analysis of immunohistochemical staining associated with survival showed that strong nuclear localization was associated with poor prognosis for colorectal cancer patients. Discussion The expression of CXCR4 has been detected in a large number of different types of cancer, together with its use as a prognostic biomarker [3,27]. In the present study we evaluated the expression of CXCR4 in colorectal cancer by quantitative RT-PCR and immunohistochemical staining.
Strong expression of nuclear localized CXCR4 and high RNA levels of CXCR4 were both independent significant predictors of poor overall and disease free survival. Our results are consistent with recent RT-PCR data of others [10,15]. We found no correlation between expression of CXCR4 mRNA (RT-PCR) and nuclear CXCR4 expression (immunohistochemistry). This might be explained by the fact that the level of CXCR4 mRNA does not distinguish between levels of membrane-expressed CXCR4 protein and nuclear-expressed CXCR4. Moreover, RNA isolated from tumor samples includes RNA from cells other than tumor cells, for instance tumor-infiltrating T cells. Tumor-infiltrating T cells also express CXCR4 [28,29], and their presence is positively associated with prognosis of colorectal cancer patients [20][21][22][23]. As a result, tumor-infiltrating T cells might disturb the prognostic evaluation of CXCR4 mRNA expression isolated from tumor tissues by quantitative RT-PCR. Therefore we additionally used immunohistochemical techniques to semi-quantitatively assess expression of CXCR4 in tumor cells only. Although RT-PCR is a better technique to quantify the level of expression, the use of immunohistochemical techniques for clinical and prognostic purposes is preferred over RT-PCR, since the intratumoral and intracellular distribution of CXCR4 can be determined, which is not possible using RT-PCR. For prognostic purposes we showed that only nuclear localization of CXCR4 was independently predictive of prognosis of colorectal cancer patients, in contrast to expression in the cytoplasm. Using immunohistochemical staining to semi-quantitatively score nuclear and cytoplasmic expression of CXCR4, and associating results with survival parameters, has been done in various types of tumors, among others in a large panel of breast carcinomata [20][21][22][23]. To our knowledge, only two studies determined the association between colorectal cancer and prognosis using immunohistochemical techniques [13,15]. These studies only detected cytoplasmic and sometimes membrane staining, while nuclear staining was not separately investigated in either study. We observed expression of CXCR4 both in the cytoplasm and nucleus of colorectal cancer tissue and, though rarely, membrane expression. Our study is the first that was able to distinguish nuclear from cytoplasmic CXCR4 expression in colorectal cancer. A possible explanation for this might be that we used a different antibody compared with previous studies. Shim et al. showed in cultured cells that CXCL12 ligand binding to CXCR4 induced translocation of CXCR4 to the cytoplasm and to the nucleus of cells [30]. The translocation of CXCR4 to the nucleus might be involved in biological processes, and CXCR4 might function as a transcription factor, as has been described for other receptors, for instance the epidermal growth factor receptor (EGFR) [30,31]. Recently, for lung tumors, it has been shown that CXCL12 activates the CXCR4 receptor and the ERK pathway, which in turn induces IKKα/β phosphorylation, p65 Ser536 phosphorylation, and NF-κB activation, which leads to β1 and β3 integrin expression and increases the migration of human lung cancer cells [32]. Since our data imply that especially nuclear staining predicts prognosis, additional research should provide insight into the nuclear function of CXCR4 in colorectal cancer. Moreover, if CXCR4 is a transcription factor or has another specific function in the nucleus, it is important to learn which genes are activated or inhibited by CXCR4 in colorectal cancer cells.
Currently, only TNM staging is used to stage patients with colorectal cancer. Adjuvant treatment is based on this staging. Combining TNM staging with selected biomarkers might better define patients who are at risk for metastases or recurrences, and might define patients who would benefit from adjuvant treatment. In conclusion, our data showed that especially nuclear localized CXCR4 determines prognosis for colorectal cancer patients. The use of CXCR4 might improve the current staging of colorectal cancer patients.
2014-10-01T00:00:00.000Z
2008-12-11T00:00:00.000
{ "year": 2008, "sha1": "29a7696e27b1fa2afa7a6120d19337d1c5e422bb", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12307-008-0016-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "29a7696e27b1fa2afa7a6120d19337d1c5e422bb", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
2809054
pes2o/s2orc
v3-fos-license
The Drosophila Helicase MLE Targets Hairpin Structures in Genomic Transcripts RNA hairpins are a common type of secondary structure that plays a role in every aspect of RNA biochemistry, including RNA editing, mRNA stability, localization and translation of transcripts, and the activation of the RNA interference (RNAi) and microRNA (miRNA) pathways. Participation in these functions often requires restructuring the RNA molecules by the association of single-strand (ss) RNA-binding proteins or by the action of helicases. The Drosophila MLE helicase has long been identified as a member of the MSL complex responsible for dosage compensation. The complex includes one of two long non-coding RNAs, and MLE was shown to remodel the roX RNA hairpin structures in order to initiate assembly of the complex. Here we report that this function of MLE may apply to the hairpins present in the primary RNA transcripts that generate the small molecules responsible for RNA interference. Using stocks from the Transgenic RNAi Project and the Vienna Drosophila Research Center, we show that MLE specifically targets hairpin RNAs at their site of transcription. The association of MLE at these sites is independent of sequence and chromosome location. We use two functional assays to test the biological relevance of this association and determine that MLE participates in the RNAi pathway. Introduction RNA hairpins are secondary structures formed by double-stranded (dsRNA) regions, known as stems, with the paired strands connected by a terminal loop. Hairpins can display a high level of heterogeneity in stem length, loop size, the number of bulges or internal loops present in the stem, and their thermodynamic properties [1]. Their function in the activation of the RNA interference (RNAi) and the microRNA (miRNA) pathways is well characterized [2]. However, hairpin formation is also required in a broad spectrum of gene-expression regulatory mechanisms, including RNA editing, mRNA stability, and the specific subcellular localization of transcripts and their translation [3,4]. RNA folding can occur co-transcriptionally, and transcriptional features such as pausing and elongation rate can shape this process [5][6][7]. A significant number of mRNAs undergo some sort of secondary structure formation in vivo, and cells restructure most of them through energetically driven processes [8]. Even when required to perform a specific function, the hairpins eventually need to be resolved to allow the formation of a functional RNA. In certain circumstances, the presence of hairpins in protein-coding transcripts can be harmful to the cell. For example, the tendency to form stable RNA hairpins is implicated in the pathogenesis of neurological disorders associated with trinucleotide repeat expansion [9,10]. The remodeling of RNA hairpins is achieved by association with single-stranded RNA (ssRNA) binding proteins, or by the action of helicases [11]. Helicases are ubiquitous enzymes that participate in all of the steps related to nucleic acid metabolism. Maleless (MLE), a Drosophila helicase, exhibits single-stranded RNA or DNA binding activities and is an RNA:DNA helicase/adenosine triphosphatase (ATPase) in vitro [12]. Orthologs of MLE, which include human RNA helicase A (RHA/DHX9), belong to the DEXH RNA helicase subfamily and are characterized by an additional domain implicated in dsRNA binding. Two specific in vivo functions of MLE have been reported.
In the first, a mutant allele of mle, mle napts, results in a paralytic phenotype due to a splicing abnormality of the para mRNA in a portion of the transcript subject to RNA editing [13]. A suggested explanation of this phenotype is that the mutant MLE helicase fails to properly unwind the RNA secondary structure targeted by the ADAR enzyme (adenosine deaminase acting on RNA), compromising access to the splicing sites retained in the hairpin. MLE's RNA unwinding activity is also essential for its function in dosage compensation [14], where it is required for remodeling the RNA on the X (roX) RNAs' loop structures in order to facilitate the assembly of the MSL complex [15,16]. Recently, we have obtained evidence that MLE is involved in a large number of additional regulatory steps and pathways involved in nucleic acid metabolism [17]. The ability of the MLE helicase to interact with RNA polymerase II (RNAPII) [18], and its presence at multiple actively transcribing sites on polytene chromosomes [19,20], led us to hypothesize a more general role for MLE in resolving co-transcriptionally generated RNA hairpins. To test this hypothesis, we analyzed Drosophila lines from the Transgenic RNAi Project (TRiP) and the Vienna Drosophila Research Center (VDRC); these lines carry inducible transgenes that express hairpin RNAs specific to individual coding genes distributed across the Drosophila genome [21][22][23][24]. We determined that MLE is specifically enriched at the sites of RNAi transgene transcription, and that this association is independent of hairpin size or genomic location. Parallel functional assays establish that the MLE helicase is required for functional RNA interference in vivo. Our results suggest that MLE may play a broad and significant role in the structure-function relation of regulatory RNAs during development and differentiation. In addition, our results have direct relevance to mammalian RNAi. The mammalian ortholog of MLE, RHA/DHX9, was shown to play a role in RNA interference [25,26]. This conclusion has been challenged [27]. Given that MLE and RHA are 49% identical and 86% similar, our demonstration, using a genetic approach, that MLE participates in the process of RNA interference provides support for the original conclusion that RHA plays a role in this pathway in mammals. MLE localizes at sites of dsRNA transcription We initially analyzed MLE's general ability to target hairpin RNAs by using RNAi stocks from the Harvard TRiP collection (flyrnai.org). These studies used an Hrb87F RNAi stock containing an integrated pValium1 plasmid expressing a dsRNA under the control of an inducible UAS promoter [21]. MLE staining of male polytene chromosomes, in addition to the usual pattern on the X chromosome and on various autosomal interbands, revealed a bright signal at the integration site of the Hrb87F RNAi plasmid (chromosome 3L 68A4 on the cytological map) only when its transcription was activated by an Actin5C-Gal4 driver carried on the second chromosome (Fig 1). MLE was always present at this site (22 nuclei from 3 different larvae were analyzed). Polytene chromosome staining with MSL1, MSL3 or MOF antibodies failed to show a similar localization (at least 10 nuclei from 3 different larvae were analyzed) (Fig 2A and S1 Fig).
Therefore, MLE's enrichment appears to be transcription dependent but independent of the MSL complex; this conclusion is further confirmed by the fact that the MLE signal is present in all the female polytene chromosomes examined (37 nuclei from 5 larvae) at levels comparable to those observed in males (Fig 2B). Curiously, we also detected an enrichment of MLE at the telomere of the right arm of chromosome 3 in the female larvae analyzed, but not in male larvae (S2 Fig). Hrb87F null mutants are viable but display abnormally elongated telomeres [28]; therefore, we speculate that the enrichment could be due to telomeric retrotransposon transcription, possibly correlated with the developmental stage of the larvae. To our knowledge, this type of targeting has never been reported in the literature, and we never observed it in any of our other experiments. This aspect, although interesting, lies outside the focus of the present work. To rule out the formal possibility that the chromosome 2 Gal4 activator used was responsible for the observed effects on MLE localization, we crossed the UAS-Hrb87F RNAi flies with flies carrying an Actin5C-Gal4 transgene on the third chromosome. The progeny of this cross also showed a clear localization of MLE at the plasmid integration site (S3 Fig). Surprisingly, no signal on the 3R telomere was detected in these larvae. MLE enrichment at sites of dsRNA transcription is sequence and chromosome location independent In order to determine whether the MLE localization is sequence specific, we tested MLE localization in the background of two additional TRiP pValium1 lines, one expressing a dsRNA targeting a different portion of the Hrb87F gene and one targeting mof, the gene that encodes the histone acetyltransferase present in the MSL complex in males, but that is also found in both sexes [29]. To avoid any ambiguity due to MLE's function in dosage compensation, we restricted these analyses to female larvae. As can be seen in Fig 3A, MLE is highly enriched at the plasmid integration site in both the second Hrb87F RNAi line and the mof RNAi line, indicating that its localization is independent of the sequence of the dsRNA being transcribed. Moreover, the successful MOF knockdown (S4 Fig) excludes a role of this protein in MLE localization. We also tested whether any sequence included in the inserted transgene, other than the dsRNA, was recruiting MLE. In the pValium1 lines, the transcriptional unit under the control of the UAS promoter contains an intron of the white gene between the two inverted repeats. To investigate whether MLE specifically targeted the RNA sequence of this intron, we took advantage of the TRiP lines transformed with pValium10 [22]. This plasmid is integrated in the same genomic site as pValium1 but contains several unique features, including a fushi tarazu gene intron replacing the white gene intron. We tested three different lines and all three of them recruited MLE (S5 Fig), arguing that MLE is not recruited by transcribed sequences in the white intron. pValium plasmids contain the vermilion gene as a selectable marker and the attP landing site is flanked by a yellow gene. Because the transcriptional status of these genes does not vary with the presence of a GAL4 activator, we considered it rather unlikely that they were responsible for the MLE enrichment.
Nevertheless, to test this possibility, we used two lines from the VDRC RNAi collection, UAS-Jil1 dsRNA and UAS-Hp1 dsRNA, in which the transgenes have been inserted into different sites in the genome using different targeting techniques. The Hp1 dsRNA construct is randomly integrated in the genome via P-element-mediated germ line transformation; it does not contain either the vermilion or the yellow gene and uses the white gene as a selectable marker. The Jil1 dsRNA construct is inserted in chromosome 2L band 30B3 via site-specific recombination; it does not contain the vermilion gene, the white gene is used as a marker, and the landing site is flanked by a yellow gene. In order to map the genomic integration sites of the Hp1 and Jil1 RNAi transgenes and correlate them with the association of MLE, we stained polytene chromosomes bearing these transgenes in combination with the Actin5C-GAL4 driver with a GAL4 antiserum. We used the DAPI pattern to precisely map the MLE and GAL4 signals and determined that MLE is enriched at the integration site of the plasmids when transcription is activated (Fig 3B). MLE specifically targets hairpin RNAs at their site of transcription MLE is recruited to a pool of highly expressed, developmentally regulated genes [20]; therefore, the observed MLE enrichment could be due to the high level of transcription caused by GAL4 activation, rather than to the features of the transcript. To test this possibility we used two control stocks from the TRiP collection that express the luciferase gene in the presence of GAL4; the luciferase genes, inserted either in a pValium1 or a pValium10 plasmid, lack a hairpin-generating sequence. In this case, although MLE is seen at a few sites on the polytene chromosomes, we did not observe any binding at the integration site of the plasmid (Fig 4A), confirming that this enzyme is specifically recruited at sites of hairpin RNA synthesis. The level of luciferase gene expression was verified by luciferase assay and qRT-PCR (Fig 4B and 4C). We also tested a stock expressing a dsRNA against the luciferase gene and observed the presence of a strong MLE signal at the site of hairpin transcription. This result excludes the possibility that the absence of MLE binding to the overexpressed luciferase gene is due to an inability to recognize that specific gene sequence; rather, it is due to the absence of a hairpin in the transcript (Fig 4A). All the TRiP and VDRC RNAi lines tested so far expressed long hairpin RNAs (lhRNAs) with stems ranging from 331 to 570 bp (S1 Table). We next asked whether the size of the dsRNA was a critical feature for MLE recruitment. We analyzed three different TRiP lines that express short hairpin RNAs (shRNAs) with stems of 21 nucleotides [23]. We found that MLE was present in all of them at the site of the RNAi transgene, although the enrichment was not as robust as in the lines expressing lhRNAs (Fig 5). This difference may be due to a weaker affinity of MLE for short hairpin RNAs or to a faster release of the helicase from a short target. It is possible that MLE is recruited to the integration site of the plasmid by an alternative DNA structure formed during the inverted repeat's transcription and not by the architecture of the transcript. Therefore, in order to establish the RNA-specific nature of the binding, we incubated polytene chromosomes with RNase A.
We chose to perform this experiment on male larvae in order to use the RNA-dependent localization of MLE on the X chromosome as a positive control for the RNase treatment. As previously reported [30], the cell permeabilization steps necessary to introduce RNase into unfixed cells partially destabilize MLE binding to the X chromosome, but they do not compromise MLE binding to the integration site of the plasmid (Fig 6). Following RNase treatment, MLE is strongly reduced at the hairpin RNA transcription site and is completely released from the rest of the X chromatin. Co-staining of MLE and MSL1, a protein not affected by RNase treatment, confirms the specific effect of RNase on MLE binding (S6 Fig). This result indicates that MLE binding at the site of hairpin RNA synthesis, while requiring the presence of RNA, is structurally different from its binding to the X chromosome. A parallel treatment with RNase III, which specifically cleaves dsRNA, did not result in a measurable effect on MLE localization (S7 Fig). However, due to the lack of a positive control, we are not able to discriminate between a resistance to the treatment and a technical issue. To provide further evidence that MLE physically binds to RNA hairpins, we performed an RNA immunoprecipitation (RIP) experiment followed by a qRT-PCR reaction. Extracts from larvae expressing Hrb87F lhRNA were used for immunoprecipitation with either anti-MLE or generic IgG antibodies. The immunoprecipitated RNA was isolated and quantified by gene-specific qRT-PCR, with IgG as a negative control. In this reaction, the cDNA was obtained using a primer targeting the loop structure formed by the white intron, and it was then amplified with two different primer sets targeting the stem region (Fig 7). The Hrb87F hairpin was found reproducibly enriched in MLE-associated RNAs over IgG-associated RNAs. While this experiment demonstrates that MLE binds the RNA in question, specificity was not assessed. MLE participates in the RNAi pathway It is possible that a highly expressed hairpin RNA attracts a broad range of RNA binding proteins, including RNA helicases, in a non-specific manner. To address this possibility, we tested the recruitment of the Rm62 RNA helicase to the integration site of the plasmid. Like MLE, Rm62 is a member of the DExD/H subfamily of helicases; it localizes at actively transcribed genes, such as developmentally regulated puffs, and is implicated in multiple biological processes, among which are chromatin insulation, RNA export, splicing and transcriptional repression [31][32][33]. More importantly, it takes part in the RNAi pathway [34,35]. Furthermore, the human homologue of Rm62, DDX17, binds pri-miRNA stem-loop structures [36]. Polytene chromosome staining from larvae expressing the Hrb87F hairpin RNA did not show any binding of the Rm62 helicase at the integration site of the plasmid, although the protein is present at other sites such as developmental puffs (Fig 8). A co-staining experiment revealed that MLE and Rm62 co-localize on certain puffs and sites; however, this was never the case for the induced hairpin transcription site (S8 Fig). A weak background signal of Rm62 at the integration site of the plasmid, present in the co-stained polytene chromosomes but not observed in the chromosomes stained only with Rm62, is probably due to bleed-through of the strong MLE signal or to an antibody cross-reaction. The above results indicate that the hairpin transcription is specifically recruiting MLE rather than generically attracting RNA helicases.
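As a brief aside on the constructs used throughout these experiments: a transcript can fold into a hairpin when one segment is followed downstream by its reverse complement, as in the inverted repeats of the UAS-driven RNAi transgenes. A minimal detection sketch is given below; the sequences are toy examples invented for illustration.

```python
# Minimal sketch: detecting whether a transcript contains an inverted
# repeat that could fold back into a hairpin stem, as in the UAS-driven
# RNAi constructs described above.  Sequences here are toy examples.
COMP = str.maketrans("AUGC", "UACG")

def revcomp(rna):
    return rna.translate(COMP)[::-1]

def has_hairpin(rna, stem, min_loop=4):
    """True if some length-`stem` segment reappears downstream as its
    reverse complement, separated by at least `min_loop` bases."""
    for i in range(len(rna) - 2 * stem - min_loop + 1):
        arm = revcomp(rna[i:i + stem])
        if arm in rna[i + stem + min_loop:]:
            return True
    return False

stem_arm = "AUGGCUAGCUUGACGAUCGA"                # 20-nt toy arm
hairpin = stem_arm + "GAAA" + revcomp(stem_arm)  # arm + loop + inverted repeat
print(has_hairpin(hairpin, stem=20))   # True
print(has_hairpin("A" * 80, stem=20))  # False (no inverted repeat)
```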
The ability of MLE to target hairpin RNAs at their sites of transcription, and our published observations that MLE interacts with the components of the RNAi machinery Argonaute 2 and Dicer-2 [17], led us to investigate the possibility that MLE is a required element of the RNAi pathway. To this end, we compared the efficiency of a Notch RNAi knockdown in a wild type vs. mle mutant background using the recessive null allele mle 1. Notch dsRNA expression in the wing margins, induced by a C96-Gal4 driver, leads to a broad range of adult phenotypes, giving rise to 6 classes, from the absence of a few margin bristles and mild notching to a complete lack of margins and a profound reduction in size of the wing blade [22,23]. In a wild type background female population raised at 25°C, C96-driven expression of the Notch RNAi resulted in a significant reduction of the wing margin (class 5) in nearly 100% of the 260 wings examined, with the remaining wings showing an even more severe phenotype (class 6) (Fig 9A). This result is concordant with those reported by Ni et al., 2011 [23]. Heterozygosity for the mle 1 allele had little effect on Notch RNAi-induced wing phenotypes; the 125 wings scored exhibited a phenotypic distribution that closely resembles the distribution previously observed in the wild type background (Fig 9A), suggesting that RNAi efficiency is not profoundly impaired by halving the genetic dose of MLE. Approximately 10% of the wings retained most of the wing margin (class 4), perhaps due to a mild mle 1 /+ heterozygote effect or to some unknown influence of the genetic background. Albeit small, this difference is nevertheless significant (Chi-square value = 29, p value <0.001). Critically, in an mle 1 /mle 1 mutant background we observed a substantially less severe Notch RNAi phenotype: all 115 wings analyzed retain most of their margins, with 70% of them exhibiting extensive notching (class 4) and approximately 30% showing mild or no notching (classes 2 and 3). In this case, as well, the difference is highly significant (Chi-square value = 358, p value <0.001). To test whether the effect of MLE on the wing phenotype induced by Notch RNAi was due to the specific mle allele used, females with an mle 1 /mle Y203 heteroallelic combination were obtained. Here again, the absence of MLE leads to an improvement in wing development (S9 Fig) that is similar to the improvement obtained with the homozygous mle 1 allele (Chi-square value = 0.68, p value = 0.71). These results suggest that MLE normally enhances the efficiency of the RNAi machinery. In order to demonstrate that this effect is reproducible with other dsRNAs, we performed a similar experiment by driving Egfr (epidermal growth factor receptor) dsRNA synthesis in the wing with the ms1096-GAL4 driver. Egfr is involved in wing vein development [37], and its knockdown in the developing wing with ms1096-GAL4 leads to profound adult vein defects. By classifying the observed vein phenotypes according to the number of veins affected, we defined the 7 classes listed in Fig 9B. In a wild type background, more than 80% of the ms1096-Egfr RNAi wings examined (n = 92) were concentrated in the 3 classes with the most severe phenotypes (classes 5, 6 and 7), and no wings in class 2 (the least severe phenotype) were detected. In contrast, in an mle 1 /mle 1 mutant background, more than 80% of the ms1096-Egfr RNAi wings (n = 86) fall in the 3 mildest phenotypic classes (classes 2, 3 and 4); no wings belonging to class 7 were found.
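As a sketch of how such class-distribution comparisons can be tested, the following applies a chi-square test to a contingency table of wing-class counts; the counts below are invented placeholders for illustration, not the data behind the Chi-square values quoted in this paper.

```python
# Illustrative chi-square test comparing wing phenotype class distributions
# between two genetic backgrounds.  The counts are invented placeholders.
from scipy.stats import chi2_contingency

# Rows: genetic background; columns: phenotype classes 2..6 (counts).
observed = [
    [0,   0,  0, 255, 5],  # hypothetical wild type background (n = 260)
    [10, 25, 80,   0, 0],  # hypothetical mle mutant background (n = 115)
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```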
The observed effect of the absence of MLE on the Egfr RNAi phenotype is significant (Chi-square value = 88, p value <0.001). Discussion MLE is a Drosophila helicase required for dosage compensation in males. In vitro, it was shown to unwind short double-stranded RNA or RNA/DNA substrates [12]. An early indication of its in vivo RNA remodeling properties was offered by the phenotype of the mle napts allele, which suggested that MLE may play a role in resolving loop structures in the para gene transcript [13]. Such a function was validated by the demonstration that MLE remodels the roX RNA hairpins during the assembly of the MSL dosage compensation complex [15,16]. As MLE has been recently implicated in a variety of regulatory steps and pathways [17], we asked whether it might have a broader role and participate in other pathways that involve the biogenesis or function of hairpin RNAs. To this end, we have carried out a series of cytological, molecular and genetic investigations, and have obtained evidence that MLE targets RNA hairpin structures at their site of transcription. This activity is independent of the sequences forming the hairpins or of the genomic location where their synthesis occurs. The MSL complex does not take part in this process. Of some interest is a set of published observations that may suggest a link between an effect of MLE in females and some observed phenotypes of mutants in the RNAi pathway. Adult females homozygous for an ovo gene null allele have no or very few germ cells. Introduction of the mle 1 mutation largely restores the germ line in these females [38]. Mutations in the genes that encode different components of the miRNA biogenesis pathway-Argonaute 1 (Ago 1), Dicer-1 (Dcr-1), pasha and Drosha-interrupt germ-line cell division and oocyte formation [39]. Connecting these two experimental dots is our observation that MLE co-immunoprecipitates with Argonaute 2 (Ago 2) and Dicer-2 [17]. In males, MLE is the only member of the MSL dosage compensation complex present in germ cells where it is not specifically associated with the X chromosome [40]. Recently, the RNAi pathway was shown to impact spermatogenesis and male fertility through the generation of endo-siRNAs from endogenous hpRNAs [41]. Although MLE has not been directly implicated in male gamete formation, it may be suggestive to note that the induced lhRNAs used in our experiments are more similar to endo-siRNAs precursors than to miRNAs precursors. MLE's targeting the transcripts of RNAi transgenes indicates that it may play an active role in the RNA interference pathway. In support of such a potential role, we have demonstrated that MLE is required for optimal RNAi efficiency. RNA helicase A (RHA/DHX9), the mammalian ortholog of MLE, plays a role in RNA interference. RHA associates with Ago 2 and Dicer in human cells [25], it is a component of the RISC complex, and it facilitates the assembly of the complex [26,42]. It is possible that MLE acts in a similar fashion, however our results indicate that MLE binds hairpin structures in primary transcripts and, therefore, appears to play a function at an earlier step in the biogenesis of interfering RNAs. A possible explanation is that the transcripts produced by RNAi transgenes may not require the action of the Drosha RNase because their sequence forms hairpins that are already structurally similar to conventional pri-miRNAs. 
Such a situation exists in the case of mirtrons, miRNAs that are produced from introns by the splicing machinery and that bypass Drosha cleavage [43]. In fact, the processing of endogenous long hairpin RNAs does not involve Drosha [44]. MLE may bind to the RNAi hairpins in order to facilitate their transfer to the cytoplasm, and it may, in a manner similar to RHA, also participate in the formation of the RISC complex. An alternative explanation is that MLE might resolve improper folding of the newly transcribed hairpin RNAs, allowing them to enter the RNAi pathway. Another possibility is that MLE interferes with the editing activity of the ADAR enzyme. ADAR and the RNAi pathway are in an antagonistic relationship, likely due to the fact that they compete for the same substrates [45,46]. ADAR deamination of adenosines to inosine at multiple sites can lead to reduced complementarity and dsRNA instability, limiting the synthesis of productive siRNA. MLE could physically block ADAR's binding to the hairpins or it could unwind the structure to compromise ADAR's recruitment. The scenarios discussed above are not mutually exclusive. Therefore, it is not unreasonable to consider that MLE could play a role in the Drosophila immune response against inverted repeat viruses, as well as in retrotransposon silencing and in the regulatory pathways controlled by the recently identified endogenous hairpin RNAs [41,44]. Moreover, the finding that not only long hairpins but also short hairpin RNAs recruit MLE suggests that MLE might be involved in the microRNA pathway as well.

Flies homozygous for the RNAi constructs were crossed with flies carrying an Actin5C-Gal4 driver balanced either with CyO-GFP or TM6B; third instar larvae lacking the GFP or Tubby markers were selected for polytene staining, while larvae showing the markers were used as controls.

RNA extraction and qRT-PCR

RNA was isolated from 10 larvae per sample using the Qiagen RNeasy mini-kit following the manufacturer's instructions. The iScript One-step RT-PCR kit with SYBR Green (BIO-RAD) was used for real-time reverse transcription PCR. Transcription measurements were normalized to the Pka-C1 transcript, and the ddCt method was used to calculate the fold difference. The results of three independent biological replicates were averaged. The primers used to detect Luciferase are: forward 5'-AGGTTCCATCTGCCAGGTATCAG-3' and reverse 5'-ACACACAGTTCGCCTCTTTGATTAAC-3'. The primers used to detect Pka-C1 are: forward 5'-TTCTCGGAGCCGCACTCGCGCTTCTAC-3' and reverse 5'-CAATCAGCAGATTCTCCGGCT-3'.

Luciferase assay

Larvae were frozen at -80°C, thawed and homogenized in 100 μl of Promega Passive lysis buffer. The lysates were then frozen at -80°C and thawed at 37°C three times, then spun at 13,000 rpm for 5 min. 20 μl was used for the luciferase assay according to the manufacturer's protocol. Relative light units (RLU) were normalized for protein content. The results are the average of three independent biological replicates.

RIP

50 larvae per sample were homogenized in 1 ml of lysis buffer containing 25 mM Tris-HCl pH 7.5, 300 mM NaCl, 2 mM MgCl2, 5 mM DTT, 0.5% NP40, protease inhibitor (Roche, Cat# 05892791001), and RNasin (Promega, Cat# N2515) and incubated on ice for 10 min. Lysates were then sonicated at 0-3 W and 2 V for 27 seconds × 4 on ice before spinning at 14,000 rpm for 10 min at 4°C. The supernatant was pre-cleared with Dynabeads Protein G (Life Technologies, Cat# 10009D) for 1 h at 4°C.
10% of the pre-cleared lysate was saved for protein quantitation for final normalization, and the remaining 90% was then incubated with 50 μl of Dynabeads Protein G slurry and equal volumes of either pAb anti-MLE or generic anti-mouse IgG (Jackson IR) antibody as IP control, for 1-2 h at 4°C. After being washed five times with lysis buffer, the IPed beads were digested with RNase-free DNase RQ1 (Promega, Cat#6106) and RNasin in digestion buffer at 37°C for a total of 30 min to remove genomic DNA resulting from non-specific binding. After washing with lysis buffer, the DNase-digested beads were treated with Trizol (Life Technologies, Cat# 15596018) and chloroform to extract the IPed RNA, which was precipitated with glycogen, 3 M NaAc, and isopropanol overnight at -80°C, followed by centrifugation at maximum speed at 4°C for 20 min. After washing with 75% ethanol, the IPed RNA pellets were resuspended in nuclease-free water for reverse transcription using SuperScript III reverse transcriptase (Cat# 18080-44, Life Technologies) to generate the first-strand cDNAs using the primer 5'-TGAGTTTCAAATTGGTAATTGGACCCT-3'. The first-strand cDNAs were used as templates for real-time PCR using SYBR Green and the following primers: Hrb87F-P1 forward 5'-TGCTTGGCAATAGCCTTCTTCA-3', Hrb87F-P1 reverse 5'-TCGATGACTACGATCCCGTTGACA-3', Hrb87F-P3 forward 5'-GCATTGTCGATCATGTACGACT-3', Hrb87F-P3 reverse 5'-CTTCGGTTTCATCACGTACT-3'. Three independent biological replicates are presented for the P1 and P3 primer pairs. The MLE-IP efficiency and the lhRNA enrichment in the MLE-IPed RNAs were calculated based on two levels of normalization. First, concentrations of the crude protein in the starting lysates were normalized to ensure that equal amounts of protein were used for both the MLE-IP and the IgG-IP. Second, the average Ct numbers from the IgG-IPs were used for normalization to compare the lhRNA enrichment by the MLE-IP relative to the IgG-IP, expressed as the fold difference 2^-dCt. The quantitative fluorescence analysis of polytenes treated with RNase A was performed using ImageJ software. For each polytene chromosome analyzed, three areas adjacent to the fluorescent band were selected to measure background levels; the mean background fluorescence was multiplied by the area of the fluorescent band and subtracted from the integrated density of the band in order to obtain the corrected total band fluorescence (CTBF). A t-test was applied for statistical validation. S9 Fig. Wing phenotypes from female flies, raised at room temperature, in which Notch dsRNA expression is induced by C96-GAL4. No difference is observed between the homozygous mutant mle 1 /mle 1 and the heteroallelic combination mle 1 /mle Y203. The slight difference in phenotype distribution observed in the mle 1 /mle 1 flies versus that reported in Fig 9 is probably due to the difference in temperature at which the flies had been raised (room temperature here versus 25°C in Fig 9). (PDF) S1 Table. Hairpin length of RNAi stocks. (XLSX)
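The quantification steps described in the methods above reduce to simple arithmetic: the ddCt fold difference for the qRT-PCR, the 2^-dCt enrichment of the MLE-IP over the IgG-IP, and the CTBF background correction. A minimal sketch follows; all numeric values are placeholders, not the published measurements.

```python
# Sketch of the quantification arithmetic described above.
# All Ct and intensity values are placeholders, not the published data.

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """qRT-PCR fold difference by the ddCt method: 2^-(ddCt)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-dd_ct)

def rip_enrichment(ct_mle_ip, ct_igg_ip):
    """Fold enrichment of MLE-IP over IgG-IP, expressed as 2^-dCt."""
    return 2 ** (-(ct_mle_ip - ct_igg_ip))

def ctbf(integrated_density, band_area, background_means):
    """Corrected total band fluorescence:
    integrated density - (mean background fluorescence x band area)."""
    mean_bg = sum(background_means) / len(background_means)
    return integrated_density - mean_bg * band_area

print(ddct_fold_change(22.1, 18.0, 26.4, 18.2))    # ~17-fold induction
print(rip_enrichment(24.3, 28.9))                  # ~24-fold enrichment
print(ctbf(1.8e6, 5200.0, [110.0, 125.0, 118.0]))  # CTBF, arbitrary units
```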
2016-05-12T22:15:10.714Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "aaad126b15b4c6366b424a21230997915890b41f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1005761&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3a0ed2d8fef510646e65c21c9fca13b8d2b67a3d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
117399193
pes2o/s2orc
v3-fos-license
Riemannian geometry of Kahler-Einstein currents II: an analytic proof of Kawamata's base point free theorem It is proved by Kawamata that the canonical bundle of a projective manifold is semi-ample if it is big and nef. We give an analytic proof using the Ricci flow, degeneration of Riemannian manifolds and $L^2$-theory. Combined with our earlier results, we construct unique singular Kahler-Einstein metrics with a global Riemannian structure on canonical models. Our approach can be viewed as the Kodaira embedding theorem on singular metric spaces with canonical Kahler metrics. Introduction This is a sequel to our earlier work [30]. A well-known theorem of Kawamata [18,19,20,17] states that if the canonical bundle K X of a projective manifold X is big and nef, then it must be semi-ample, i.e., the linear system |mK X | is base point free for some sufficiently large m ∈ Z. This is a very important result and has many deep generalizations and applications in the minimal model program. In particular, it implies the finite generation of the canonical ring and the abundance conjecture for minimal models of general type. Recent progress in Kähler geometry has revealed deep connections and interplay among nonlinear PDEs, Riemannian geometry and complex algebraic geometry. The solution to the Yau-Tian-Donaldson conjecture [48,39,40,13,14,8,9,10,42] has established the relation between the existence of Kähler-Einstein metrics and the K-stability for Fano manifolds, using the theory for degeneration of Riemannian Kähler manifolds by Cheeger-Colding [5,6] and Cheeger-Colding-Tian [7], and Hormander's L 2 -estimates on singular metric spaces to establish the partial C 0 -estimate proposed by Tian [39]. The analytic minimal model program with Ricci flow proposed by the author and Tian [31,32,33] connects finite time singularity of the Kähler-Ricci flow to geometric and birational surgeries, and its long time behavior to the existence of singular Kähler-Einstein metrics and the abundance conjecture. There have been many results in this direction [35,36,37,38,30]. It is further proposed by the author [30] that the Kähler-Ricci flow should give a global uniformization in terms of Kähler-Einstein metrics for projective varieties as well as a local uniformization in terms of the transition of shrinking and expanding solitons for singularities arising simultaneously from the Kähler-Ricci flow and birational transformation [28,29]. a birational morphism Φ : X → Y from X to a unique projective variety Y with canonical singularities and c 1 (Y ) = 0. Furthermore, there exists a unique smooth Ricci-flat Kähler metric g CY on Y • , the smooth part of Y , satisfying (1) g CY extends uniquely to a Kähler current ω CY ∈ c 1 (Φ * L) on Y with bounded local potentials , (2) the metric completion of (Y • , g CY ) is a compact metric length space (Y ∞ , d ∞ ) homeomorphic to the projective variety Y itself, (3) the singular set S of (Y ∞ , d ∞ ) has Hausdorff dimension no great than n − 4. In particular, Our approach follows the traditional and more constructive proof for the Kodaira embedding theorem by Hormander's L 2 -estimates without applying any sophisticated results from algebraic geometry such as the non-vanishing theorem. The canonical singular Kähler-Einstein metric on the minimal model of general type X plays an important role in both applying the analytic L 2 -estimates and proving that it coincides with the metric space from the degeneration of the Riemannian almost Kähler-Einstein metrics on X. 
Therefore our method can be viewed as the Kodaira embedding theorem on singular metric spaces with canonical Riemannian Kähler metrics. We believe that it can be applied to prove the general base point free theorem of Kawamata for any big and nef divisor on a smooth projective manifold using the Kähler-Einstein metric with conical singularities. We also hope that our approach can lead to an analytic and Riemannian geometric proof for the finite generation of canonical rings on smooth varieties of general type, which is already proved by algebraic methods [1,27]. In general, if X is a projective manifold of positive Kodaira dimension, it admits a unique canonical twisted Kähler-Einstein current constructed in [31,32] (also see [45] for collapsing Calabi-Yau manifolds). We hope such analytic canonical metrics can be used to prove the abundance conjecture if the Riemannian collapsing theory for Kähler manifolds or the Kähler-Ricci flow can be established.

A priori estimates for the Kähler-Ricci flow

In this section, we will establish some basic estimates for the singular Kähler-Einstein metrics on smooth minimal projective manifolds of general type. Let X be a minimal manifold of general type of complex dimension n. Let Ω be a smooth volume form on X and let χ = √−1∂∂ log Ω ∈ −c 1 (X). For any smooth Kähler form ω 0 ∈ H 1,1 (X, R) ∩ H 2 (X, Q), we consider the following Monge-Ampere flow: ∂ϕ/∂t = log ( (χ + e −t (ω 0 − χ) + √−1∂∂ϕ) n / Ω ) − ϕ, ϕ(0) = 0. (2.1) Without loss of generality, we assume that ω 0 − χ is Kähler. Let ω(t) = χ + e −t (ω 0 − χ) + √−1∂∂ϕ(t) and g(t) be the associated Kähler metrics. Then g(t) solves the normalized Kähler-Ricci flow ∂g/∂t = −Ric(g) − g, g(0) = g 0 , where g 0 is the initial Kähler metric associated to the Kähler form ω 0 . Let h χ be a smooth hermitian metric on K X defined by h χ = Ω −1 . Since K X is big and nef, by Kodaira's lemma there exist an effective divisor D on X and ǫ 0 > 0 such that K X − ǫD is ample for all ǫ ∈ (0, ǫ 0 ). Therefore for any ǫ sufficiently small, there exists a smooth hermitian metric h D,ǫ such that χ − ǫRic(h D,ǫ ) is Kähler. We let σ D be the defining section of D and fix a smooth hermitian metric h D on D. (2) For any ǫ > 0, there exists C ǫ > 0 such that for all t ≥ 0, we have on X (3) There exist λ, C > 0 such that for all t ≥ 0, we have on X Proof. The first statement follows immediately from the maximum principle. The second and the third statements follow from Tsuji's trick by applying the maximum principle to ϕ − ǫ log |σ D | 2 h D,ǫ for any ǫ ∈ (0, ǫ 0 ] and to log tr ω 0 (ω) − Aϕ − log |σ D | 2 h D,1/A for some fixed sufficiently large A > 0. The following lemma follows from the standard third order estimates (either by local estimates [25] or by global estimates [22] with weights) and local higher order estimates. Lemma 2.2. For any k > 0 and compact set K ⊂⊂ X \ D, there exists C k,K > 0 such that for all t > 0, Lemma 2.3. ∂ϕ/∂t converges smoothly to 0 on any compact subset of X \ D. Proof. We apply a trick of Zhang [50] by looking at the evolution of the following quantity, where □ t = ∂/∂t − ∆ t and ∆ t is the Laplacian associated to g(t). Therefore there exist C 1 , C 2 > 0 such that for t ≥ 1, This implies that ϕ + C 2 e −t/2 decreases to ϕ ∞ ∈ PSH(X, χ) ∩ C ∞ (X \ D) and so ∂ϕ/∂t must tend to 0 away from D. The following corollary follows immediately from Lemma 2.1, Lemma 2.2 and the proof of Lemma 2.3. Corollary 2.1. The solution ϕ(t) of the parabolic Monge-Ampere equation (2.1) converges to a unique ϕ ∞ ∈ PSH(X, χ) ∩ C ∞ (X \ D) as t → ∞.
In particular, for any ǫ > 0, there We also have the following existence and uniqueness result. It suffices to prove the uniqueness. Suppose there exists another solution ϕ ′ . Then Since ψ ǫ tends to ∞ along D and ψ ǫ (0) ≥ 0 for sufficiently large A > 0, we can apply the maximum principle and so ψ ǫ ≥ 0 for all t ≥ 0. By letting t → ∞, we have Then it immediately follows from the comparison principle that ϕ ∞ = ϕ ′ on X \ D and so the lemma follows. We let h t = (ω(t)) −n be the hermitian metric on K X for t ∈ [0, ∞). Then we have the following lemma. Lemma 2.5. For any σ ∈ H 0 (X, mK X ), there exits C > 0 such that for all t ≥ 0, Without loss of generality, we can assume that for sufficiently large m ∈ Z, a basis {σ j } dm j=0 of H 0 (X, mK X ) gives a birational map from X into the projective space CP dm , where d m + 1 = h 0 (X, mK X ). We consider a resolution for the base locus {σ j } j π : X ′ → X such that π * (mK X ) = L + E, where L is semi-ample and E is the fixed part of π * (mK X ). We can assume that E is a divisor of simple normal crossings. Since L is big and semi-ample, there exists an effective divisor D ′ on X ′ such L − ǫD ′ is ample for all ǫ > 0. The closed form θ = m −1 √ −1∂∂ log( dm j=0 |σ j | 2 )| on X ′ \ E is the Fubini-Study metric which smoothly extends to X ′ globally in c 1 (L). There exists a smooth hermitian metric h D ′ on the line bundle associated to [D ′ ] such that for all sufficiently small ǫ > 0. Let Ω m = ( dm j=0 |σ j | 2 ) 1/m be the smooth real nonnegative (n, n)-form on X. We let where Ω t = ω(t) n . Then H is bounded above and smooth outside the base locus of {σ j } j and the evolution equation for H is given by t H = n − tr ω (θ). We now lift the above equation to X ′ and it is smooth on X ′ \ E. We now consider the Monge-Ampere equation for some smooth volume form Ω ′ on X ′ with X ′ Ω ′ = (2) −n X ′ θ n . By standard argument [46,15,32] where σ E is the defining section of E and h E is a smooth hermitian metric on the line bundle associated to E. Then G ǫ is smooth on X ′ \ (E ∪ D) and tends to −∞ along E ∪ D and we can apply the maximum priniciple for G ǫ on X ′ \ (E ∪ D). Then there exists C > 0 such that for sufficiently small ǫ > 0. By the maximum principle, G ǫ is bounded above uniformly for all t and ǫ. By letting ǫ → 0, H is uniformly bounded above and this proves the lemma. Then for any m and σ ∈ H 0 (X, mK X ), there exists C > 0 such that or equivalently there exists C > 0 such that on X, Proof. For simplicity, we write |σ| 2 and |∇ t σ| 2 for |σ| 2 h m t and |∇ t σ| 2 The evolution equation for |∇ t σ| 2 is given by Let H = |∇ t σ| 2 + A|σ| 2 . The lemma is then proved by applying the maximum principle to H after choosing sufficiently large A > 0. By Lemma 2.6 and the local smooth convergence of ω(t) on X \ D, we have the following corollary. Corollary 2.3. For any m and σ ∈ H 0 (X, mK X ), there exists C > 0 such that We remark that the constant C in (2.4, 2.5, 2.6, 2.7) depends on m and σ. Definition 2.1. Let R X be the set of all points p on X such that all µ-jets at p are globally generated by some power of K X for |µ| ≤ 2, where µ = (µ 1 , ..., µ n ) ∈ Z n is nonnegative. In local holomorphic coordinates z with p = 0, the µ-jets at p are given by n i=1 z µ i i . Then there exist m > 0 and a basis {σ j } dm j=0 of H 0 (X, mK X ) such that {σ j } j gives a local embedding in a small neighborhood of p into a projective space CP dm . Let θ be the pullback of the Fubini-Study metric. 
First we note that, by the maximum principle, φ is uniformly bounded above. Then we consider the quantity H. Outside of the common base locus of {σ j } j , since H = ∞ along the base locus of {σ j } j , from the maximum principle, H is uniformly bounded below. Now let G = log tr ω (θ) − AH. Then for sufficiently large A, applying the argument for the parabolic Schwarz lemma in [31,32], we obtain the required estimate.

Riemannian geometric limits

In this section, we will apply the Cheeger-Colding theory [5,6,7] for degeneration of Riemannian manifolds with Ricci curvature bounded below, the work of Tian-Wang [43] for almost Kähler-Einstein metrics, and local L 2 -estimates to study the Riemannian structure of (R X , g ∞ ) and its metric completion. We first pick a Kähler form ω 0 as in (3.8), for some sufficiently small ǫ 0 > 0. We now consider the following family of Monge-Ampere equations (3.9) for k ∈ Z + . Let ω k denote the corresponding Kähler form and g k the corresponding Kähler metric. Then the curvature equation (3.10) for g k follows. The following estimates are obtained by arguments similar to those of Section 2, using elliptic arguments instead of parabolic estimates. Lemma 3.1. We have the following uniform estimates. (1) There exists C > 0 such that for all k > 0, we have on X (2) for any ǫ > 0, there exists C ǫ > 0 such that for all k > 0, we have on X (3) there exist λ, C > 0 such that for all k > 0, we have on X In particular, by the uniqueness from Lemma 2.4, ϕ ′ ∞ = ϕ ∞ , where ϕ ∞ is the limiting potential from the Monge-Ampere flow (2.1). We will now verify in the following lemma the almost Kähler-Einstein conditions introduced in [43]. Lemma 3.2. Let g k be the solution of equation (3.9) for k ∈ Z + . Then g k satisfies the following almost Kähler-Einstein conditions: there exist p ∈ X \ D and r 0 , κ > 0 such that for all k, Proof. (1) and (2) follow easily from equation (3.10) and Lemma 3.1. Notice that the minimum of the scalar curvature is non-decreasing along the Ricci flow while R(g k (0)) > −n. Therefore the remaining condition follows. We then apply the main results of Tian-Wang [43] to obtain the following proposition: (2) the limiting metric d ∞ induces a smooth Kähler-Einstein metric g KE on R satisfying Ric(g KE ) = −g KE , and (3) the singular set S has Hausdorff dimension no greater than 2n − 4. The rest of this section is devoted to proving that the regular part R coincides with R X and g KE coincides with g ∞ , the limiting Kähler-Einstein metric from the Kähler-Ricci flow. Definition 3.1. Let S X be the set of points q ∞ in X ∞ such that there exists a sequence of points q k ∈ (X \ R X , g k ) converging to q ∞ in the Gromov-Hausdorff sense. By taking a diagonal sequence, it is obvious that S X is closed. The following lemma is the pointed version of Theorem 4.1 in [24] due to Rong-Zhang, establishing a local isometry and global homeomorphism between R X and X ∞ \ S X . Lemma 3.3. There exists a continuous surjection which restricts to a homeomorphism and a local isometry, where (R X , g ∞ ) is the metric completion of R X with respect to the smooth limiting metric g ∞ . Lemma 3.3 immediately implies the following corollary, because all tangent cones at each point in X ∞ \ S X are the flat C n . We then want to show that S X ⊂ S. We look at the parabolic Monge-Ampere equation corresponding to the normalized Kähler-Ricci flow. Therefore on X \ A, we have (3.11). The following theorem is due to Demailly [13], for solving the global ∂-equation on pseudo-effective line bundles on projective manifolds. Proof. Suppose not. Then there exists a sequence of points q k ∈ X \ R X such that q k converges to q ∞ ∈ R.
Then there exists a sufficiently small r 0 > 0 such that the limiting metric d ∞ induces a smooth Kähler-Einstein metric g ′ ∞ on B d∞ (p ∞ , 3r 0 ). Using the modified Perelman pseudolocality theorem in [43], we know that B g k (1) (q k , 3r 0 ) converges smoothly to B d∞ (q ∞ , 3r 0 ). More precisely, for each k, there exists a diffeomorphism under which the relevant C l -norms tend to 0 for any fixed l > 0, where I k and I ∞ are the complex structures on B g k (1) (q k , 3r 0 ) and B d∞ (q ∞ , 3r 0 ). Therefore we can assume that the curvature of g k on B g k (1) (q k , 2r 0 ) is uniformly bounded and the injectivity radius of g k at q k is strictly greater than 2r 0 , for all k. We can pick complex coordinates z (k) = (z (k) 1 , ..., z (k) n ) on B g k (1) (q k , r 0 ) and there exists K > 0 independent of k such that on B g k (1) (q k , r 0 ), where d C n ,k is the Euclidean metric induced by z (k) . Let η be a smooth cut-off function with η(x) = 1 for x < 1/2 and η(x) = 0 for x ≥ 1. We now construct the weight Ψ k , and we then define the corresponding sections. Then there exists m 0 such that for m ≥ m 0 , we have Now we fix such an m for all k and we consider η k,µ , which is smooth on X and hence L 2 -integrable with respect to (h k ) m e −Ψ k for sufficiently large k. Then we can solve for u k satisfying This forces u k to be holomorphic near q k and to vanish to order |µ|. This implies that η(ρ k /M )(z (k) ) µ − u k is a holomorphic section of mK X generating the µ-jet at q k and so q k ∈ R X for sufficiently large k. This is a contradiction. We remark that we cannot apply the proof of Lemma 4.4 in [30] to prove Lemma 3.4, since we do not have a contraction morphism from X to its canonical model. Immediately, we have the following corollary. Corollary 3.2. The metric completion of (R X , g ∞ ) is isomorphic to (X ∞ , d ∞ ). In particular, there is an isomorphism between (R X , g ∞ ) and (R, g KE ). We can now simply identify R and g KE with R X and g ∞ . We also have the following technical corollary. Corollary 3.3. Let D ′ be any effective divisor such that L − ǫD ′ is ample for some ǫ > 0. Then X \ D ′ ⊂ R X .

L 2 -estimates

We now pick a base point p in R X as in Lemma 3.2. (X, p, g k ) converges to the metric length space (X ∞ , p ∞ , d ∞ ), where g k is defined in (3.9). We assume that diam g k (X) → ∞, because the proof of Theorem 1.1 is much simpler if diam g k (X) is uniformly bounded for all k, in which case the limiting space (X ∞ , d ∞ ) is a compact metric space. Let B ∞ (r) be the geodesic ball in (X ∞ , d ∞ ) centered at p ∞ with radius r > 0. Let B k (r) be the geodesic ball in (X, g k ) centered at p of radius r. Then B k (r) converges to B ∞ (r) in the Gromov-Hausdorff topology as k → ∞. We will derive local L 2 -estimates on each B ∞ (r) for all r > 0. Let g KE = g ∞ be the limiting smooth Kähler-Einstein metric, the smooth limit of g k on R, and let h KE = h χ e −ϕ KE = (ω KE ) −n be the hermitian metric on K X on R, where ϕ KE = ϕ ∞ is the limiting Kähler potential from Section 2 and ω KE is the Kähler-Einstein form associated to g KE . Each σ ∈ H 0 (X, mK X ) can be defined on R ⊂ X ∞ , and Corollary 2.2 and Corollary 2.3 imply the corresponding sup bounds. Immediately we have the following corollary, because R is an open dense convex subset of (X ∞ , d ∞ ). For convenience, we define the following pluricanonical system for X ∞ . Definition 4.1. We denote by H 0 (X ∞ , mK X∞ ) the set of all holomorphic sections in H 0 (X, mK X ) over R X .
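Several of the displays in the preceding argument were lost in extraction. Schematically, the construction combines a Hörmander-type weighted L² estimate with the jet-producing weight Ψ_k; the following LaTeX sketch is a hedged consolidation of that scheme (the constant C and the positivity hypotheses are schematic, not the paper's verbatim statements):

```latex
% Hedged consolidation of the L^2 / jet argument above (schematic constants).
% Hormander-type estimate: for a dbar-closed mK_X-valued (0,1)-form tau,
% there is a solution of the dbar-equation with weighted L^2 control:
\[
\bar\partial u_k = \bar\partial\!\left(\eta(\rho_k/M)\,(z^{(k)})^{\mu}\right),
\qquad
\int_X |u_k|^2_{(h_k)^m}\, e^{-\Psi_k}\, dV_{g_k}
  \;\le\; C \int_X \bigl|\bar\partial\bigl(\eta(\rho_k/M)(z^{(k)})^{\mu}\bigr)\bigr|^2_{(h_k)^m,\,g_k}\, e^{-\Psi_k}\, dV_{g_k} .
\]
% Finiteness against the weight e^{-\Psi_k}, which is singular at q_k,
% forces u_k to vanish to order |mu| at q_k, so
\[
\sigma_k \;=\; \eta(\rho_k/M)\,(z^{(k)})^{\mu} \;-\; u_k \;\in\; H^0(X, mK_X)
\]
% is holomorphic and generates the prescribed mu-jet at q_k.
```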
We need the cut-off functions constructed in the following lemma corresponds to Lemma 3.7 in [30]. Proof. The difference of Lemma 4.1 from Lemma 3.7 in [30] is that a priori we do not know if the local potential ϕ KE of ω KE is bounded. Without loss of generality, we can assume that |σ| 2 h D ≤ 1. Let θ be a Kähler metric on Y such that θ > Ric(h D ) and [θ] ≥ −c 1 (X). Let F be the standard smooth cut-off function on [0, ∞) with F = 1 on [0, 1/2] and F = 0 on [1, ∞). We then let η ǫ = max(log |σ| 2 h D , log ǫ). For sufficiently small ǫ, we have − log ǫ ≤ η ǫ ≤ 0. Then obviously, η ǫ ∈ P SH(X, θ) ∩ C 0 (X). Now we let Then ρ ǫ = 1 on K if ǫ is sufficiently small. We first notice that for fixed ǫ > 0 from the smooth uniform convergence of ω k on any compact subset of X \ D, where ω k is the almost Kähler-Einstein metric defined in equation (3.9). Straightforward calculations give where C only depends does not depend on ǫ and k as the class [ω k ] is uniformly bounded for all k. Hence Therefore we obtain ρ ǫ ∈ C 0 (X) satisfying the conditions in the lemma. The lemma is then proved by smoothing ρ ǫ on Supp ρ ǫ \ K. Lemma 4.2. For any The following proposition is essentially Proposition 4.5 in [30] for solving the following ∂-equation. Proposition 4.1. Let X be a smooth minimal manifold of general type. Let ω = mω KE ∈ −mc 1 (X) and h = (h KE ) m be a hermitian metric on (K X ) m for any m ≥ 2. Then for any smooth mK X -valued (0, 1)-form τ satisfying there exists an mK X -valued section u such that ∂u = τ and Proof. The proof is almost identical to that of Proposition 3.5 and Proposition 4.5 in [30]. The only difference is that a priori we do not know if ϕ KE is bounded. Following the proof of Proposition 4.5 in [30], we consider the Monge-Ampere equation for sufficiently small 0 < ǫ < ǫ 0 χ − ǫǫ 0 Ric(h D,ǫ 0 ) + √ −1∂∂ϕ ǫ n = e (1+ǫ)ϕǫ Ω. and let We now can apply the maximum principle with the key observation that there exists C > 0 such that for all ǫ ∈ (0, ǫ 0 ) Then for sufficiently small ǫ, we have Since ϕ ǫ converges to ϕ ∞ as ǫ → 0, we can apply Theorem 3.1 by writing mK X = (m − 1)K X + K X . We then can proceed as in the proof of Proposition 3.5 in [30]. Local freeness for the limiting metric space We will apply the same argument as in Section 3.4 and Section 4.4 [30]. For completeness, we include the definition of the H-condition introduced by Donaldson-Sun [14] and the sketch of proof for our main result in Proposition 5.2. Here all the norms are taken with respect to h and g. The constant C in the H condition depends on the choice (p * , D, U, J, L, g, h). Fix any point p on X ∞ , (X ∞ , p, mω ∞ ) converges in pointed Gromov-Hausdorff topology to a tangent cone C(Y ) over the cross section Y , where ω ∞ = ω KE . We still use p for the vertex of C(Y ). We write Y reg and Y sing the regular and singular part of Y . Y sing has Hausdorff dimension strictly less than 2n − 3. C(Y reg ) \ {p} has a natural complex structure induced from the Gromov-Hausdorff limit and the cone metric g C on C(Y ) is given by where r is the distance function for any point z ∈ C(Y ) to p. We can also write the cone metric g C = 1 2 √ −1∂∂|z| 2 . One considers the trivial line bundle L C on C(Y ) equipped with the connection A C whose curvature coincides with g C . The curvature of the hermitian metric defined by h C = e −|z| 2 /2 is g C . 1 is a global section of L C with its norm equal to e −|z| 2 /2 with respect to h C . The following lemma is due to [14]. 
By the construction in [14], we can always assume that both D and U are a product in satisfies the H-condition from Lemma 5.1. For any m ∈ Z + , we can define and µ m : U → U (m) by µ m (z) = m −1/2 z. The following proposition from [14] establishes the stability of the H-condition for perturbation of the curvature and the complex structure. Fix any point p, we can assume that (X ∞ , p, m v g ∞ ) converges to a tangent cone C(Y ) for some sequence m v in pointed Gromov-Hausdorff topology. In particular, on the regular part of C(Y ), the convergence is locally C 2,α and the metrics m v g ∞ converge locally in C 1,α . Fix any open set U ⊂⊂ C(Y reg ) \ {p}, This would induce embeddings χ mv : U → R = (X ∞ ) reg . Let g mv be the pullback metric of g ∞ on (X ∞ ) reg and J mv be the pullback complex structure. The following lemma follows from the convergence of (X ∞ , p, m v g ∞ ). Lemma 5.2. There exists v such that one can find an embedding χ mv such that The following is the main result in this section. Proposition 5.2. For any point q ∈ X ∞ , there exist m ∈ Z + and a holomorphic section σ ∈ H 0 (X ∞ , mK X∞ ) such that Proof. The proposition can be proved exactly the same argument for Proposition 3.9 and Proposition 4.6 in [30]. We remark that from Proposition 5.2, we cannot conclude σ(q) = 0, because ϕ KE might tends to −∞ near q. Global freeness We will complete the proof for Theorem 1.1 in this section by proving freeness of mK X at any fixed point q ∈ X for some sufficiently large m depending on q. First we want to show that for any fixed point q on D, the distance from q to a fixed point p ∈ R X is uniformly bounded for all g k . We consider the a log resolution is a union of smooth divisors with simple normal crossings and there exists O in the smooth part of π −1 1 (D) with π 1 (O) = q. We then blow up Z at O with π 2 :X → Z. Letπ = π 1 • π 2 :X → X. We have the following adjunction formula because X is smooth where (n − 1)E + F is the exceptional divisor ofπ, F j are effective prime smooth divisors oñ X with a j > 0 for j = 1, ..., J, E is the exceptional divisor of π 2 isomorphic to CP n−1 . Sinceπ * K X is big and nef, by Kodaira's lemma, there exists an effective divisorD such that its support coincides with the support of the exceptional divisors ofπ and π * K X − ǫD is ample for all sufficiently small ǫ > 0. We can always assume thatD =D ′ + E, where the effective divisor D ′ does not contain E. Let σ E , σ F and σD be the defining sections of E, F andD. Here we consider σ E , σ F and σD be the multivalued holomorphic sections which become global holomorphic sections after taking some power. Let h E , h F , hD be smooth hermitian metrics on the line bundles associated to E, F andD such that for a smooth volume formΩ onX and for some fixed sufficiently small ǫ 0 > 0, wherẽ χ = (π) * χ. Letω be a fixed smooth Kähler form onX. Then the Kähler-Einstein equation lifted tõ X is equivalent to the following degenerate Monge-Ampere equation whereφ KE = (π) * ϕ KE . We consider the following family of Monge-Ampere equations where ω 0 is the Kähler form defined as in equation (3.8). Let By Yau's theorem, equation (6.17) always admits a unique smooth solutionφ k,ǫ for all sufficiently small ǫ > 0. The following lemma corresponds to Lemma 4.9 in [30]. Proof. 
The uniform upper bound forφ k,ǫ follows from the mean value inequality for the plurisubharmonic function combined with the Jensen inequality and the uniform bound for To prove the lower bound forφ k,ǫ , we consider Thenφ k,ǫ,δ satisfies onX \D Let ψ k,ǫ,δ ∈ P SH(X,χ − δRic(hD)) ∩ C 0 (X) solves Then there exists C = C(δ) independent of k, ǫ such that ψ k,ǫ,δ L ∞ (X) ≤ C. Then the low bound forφ k,ǫ,δ follows immediately from the maximum principle. The estimate (6.19) can be proved by standard maximum principle using Tsuji's trick. Let B O be a sufficiently small Euclidean ball on Z centered at O and letB O = π −1 2 (B O ) inX. The support ofD ′ andF = (π 2 ) −1 (F ) − E, the proper transformation of F , lie in the subvariety defined by w = 0 for a holomorphic function w. Lemma 6.2 immediately implies the following claim. There exist λ, C > 0 such that for all k ∈ Z + and ǫ ∈ (0, 1), The following lemma is purely local and it corresponds to Lemma 4.11 in [30]. Lemma 6.4. Letω be the smooth closed nonnegative closed (1, 1)-form as the pullback of the Euclidean metric √ −1 n j=1 dz j ∧ dz j on B O .ω is Kähler onB O \ E. There exist C > 0, sufficiently small ǫ 0 > 0 and a smooth hermitian metric h E on E such that onB O , The following is the main estimate in this section and the proof follows from the proof of Proposition 4.7 in [30]. The only difference is that in our situationφ k,ǫ is not uniformly bounded in L ∞ , however, the estimate (6.18) suffices to achieve the same estimate (6.23) by increasing λ. Lemma 6.5. There exist ν, λ > 0 and C > 0 such that for any ǫ ∈ (0, 1) and k ∈ Z + , we have onB O , where ω k is the almost Kähler-Einstein metric defined in (3.9). Lemma 6.5 immediately implies the following corollary by letting ǫ and then k −1 → 0. Corollary 6.1. There exist ν, λ > 0 and C > 0 such that onB O , and for any k ∈ Z + , Proposition 6.1. For any q ∈ X, there exists a smooth path γ(t) for t ∈ [0, 1] such that (1) γ(t) ∈ X \ (D ∪ {q}) for t ∈ [0, 1) and γ(1) = q, (2) γ is transversal to D, (3) for any ε > 0, there exists δ ∈ (0, 1) such that for all k > 0 and t ∈ [1 − δ, 1], Proof. It suffices to prove the case when q ∈ D and then the proposition is proved by picking a sufficiently small line segment The proposition is then proved by applying estimate (6.23) as arc length of γ with respect to ω k, is uniformly bounded for all k. Corollary 6.2. Fix a base point p ∈ R X . Then for any point q ∈ X, there exists A q > 0 such that d g k (p, q) ≤ A q . Furthermore, q must converge in Gromov-Hausdorff sense to a point q ∞ ∈ X ∞ . Proof. The corollary immediately follows from Corollary 6.1 and the line segment chosen in Proposition 6.1, after letting ǫ → 0. The following is the local freeness for the pluricanonical system on X. Proposition 6.2. For any q ∈ X, there exist m ∈ Z + and σ ∈ H 0 (X, mK X ) such that σ(q) = 0. Proof. It suffices to prove the case when q ∈ D. Let q ∞ ∈ X ∞ be the limiting point of q. By Proposition 5.2, there exist m > 0 and σ ∈ H 0 (X ∞ , mK X ) such that |σ| 2 (h∞) m (q ∞ ) > 1. We now consider the sequence q j in the smooth path γ(t) in Proposition 6.1 such that for all k, d g k (q, q j ) < 1/j, lim j→∞ d g 0 (q j , q) = 0. Certainly q j ∈ (X, g k ) converges to the same point q j ∈ (X ∞ , d ∞ ) as k → ∞ by the smooth convergence of g k on R X for each fixed j. Then By continuity of |σ| 2 h m ∞ on (X ∞ , d ∞ ) from Lemma 4.1, there exists J > 0 such that for j > J, |σ| 2 (h∞) m (q j ) ≥ 1/2. 
This implies that there exists C > 0 such that for all j > J, ϕ ∞ (q j ) ≤ log |σ| 2 (hχ) m (q j ) + C. On the other hand, for any δ > 0, there exists C δ > 0 such that ϕ ∞ ≥ δ log |σ D | 2 h D − C δ . Therefore for any δ > 0, there exists C δ > 0 such that for all j > J, δ log |σ D | 2 h D (q j ) ≤ log |σ| 2 h m χ (q j ) + C δ or |σ D | 2δ h D (q j ) ≤ e C δ |σ| 2 h m χ (q j ). Since q j lies in a smooth path transversal to D as chosen in Proposition 6.1 and both σ D and σ are holomorphic, this implies that σ cannot vanish at q. This proves the proposition. We now can prove Theorem 1.1. Theorem 6.1. Let X be a smooth minimal model of general type. Then mK X is globally generated for some m sufficiently large. Proof. By Proposition 6.2, for any q ∈ X there exist m q ∈ Z + and σ q ∈ H 0 (X, m q K X ) such that σ q does not vanish at q. Then there exists an open neighborhood U q of q such that σ q does not vanish anywhere in U q . The theorem is then proved by the finite covering theorem since X is compact. As an application, we prove the L ∞ estimate for ϕ ∞ in Kähler-Einstein equation (2.3). Generalizations and discussions 7.1. Freeness for big and nef line bundles on Calabi-Yau manifolds. Using the same argument for Theorem 1.1, we can also prove Theorem 1.2 for the semi-ampleness of a nef and big line bundle L over a Calabi-Yau manifold X of dimension n. In fact, the proof is simpler, because one can obtain a uniform diameter bound for the approximating Ricci-flat Kähler metrics. We lay out the sketch of the proof and leave the details for the readers to check. Then Ric(ω k ) = 0, lim k→∞ c k = 0 and diam g k (X) is uniformly bounded. We can adapt the arguments in previous sections as well as the argument in Section 3 of [30] to prove Theorem 1.2. 7.2. Kawamata's base-point-free theorem. Using the constructions of conical Kähler-Einstein metrics, one should also be able to prove the following base-point-free theorem of Kawamata [18,19] with an additional assumption on the bigness of the divisor D. Theorem 7.1. Let X be a projective manifold. If D is a big and nef divisor such that aD − K X is big and nef for some a > 0, then D is semi-ample. The assumption for D being big is the noncollapsing condition to guarantee the existence of Kähler-Einstein current ω KE ∈ c 1 (aD) satisfying where η ≥ 0 is a nonnegative current in c 1 (aD − K X ) on X and it vanishes on a Zaraski open dense subset of X. We will leave the more detailed discussion in our future work. 7.3. Toward finite generation of canonical rings and abundance conjecture. We believe that our approach can also be applied to understand the finite generation of canonical rings, which is proved in [1] and [26,27]. The canonical Kähler-Einstein current on X is constructed in [30,2] and we hope that the scheme in our paper and in [32] can lead to an analytic and Riemannian geometric proof for finite generation of canonical rings for projective manifolds of general type. Another approach is the Kähler-Ricci flow through singularities as developed in [33] and this should lead to deeper understanding for analytic and geometric aspects of singularities and flips in the minimal model program.
2014-09-30T03:18:38.000Z
2014-09-30T00:00:00.000
{ "year": 2014, "sha1": "826d597d817de18c4590f4c44bd756ae03514c12", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "826d597d817de18c4590f4c44bd756ae03514c12", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
267515493
pes2o/s2orc
v3-fos-license
Synthesis rifaximin with copper (Rif-Cu) and copper oxide (Rif-CuO) nanoparticles Considerable dye decolorization: An application of aerobic oxidation of eco-friendly sustainable approach

In this study, rifaximin-supported copper (Cu) and copper oxide (CuO) nanoparticles (NPs) were synthesised. The resultant nanoparticles were used to degrade Rhodamine B (RhB) and Coomassie Brilliant Blue (G250). Rifaximin copper and copper oxide nanoparticles were characterised using Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), ultraviolet-visible spectroscopy (UV), X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and gas chromatography-electron ionization mass spectrometry (GC-EI-MS). An FT-IR study confirmed the formation of the Cu nanoparticles via the peak at 562 cm-1. Rifaximin Cu and CuO nanoparticles displayed UV absorption peaks at 253 nm and 230 nm, respectively. Coomassie Brilliant Blue G250 was completely decolourised (100 %) by the Cu nanoparticles, and Rhodamine B was decolourised by 73 % by the Rifaximin CuO nanoparticles; overall, the Rifaximin Cu nanoparticles with Coomassie Brilliant Blue G250 achieved the highest degree of dye decolorization. The aerobic oxidation of isopropanol was confirmed by GC-MS analysis: retention times of 27.35 and 30.32, obtained using the Cu and CuO nanoparticles, confirmed 2-propanone as the final product. Such aerobic alcohol oxidation is used in the textile and pharmaceutical industries. Rifaximin CuO nanoparticles are highly active in aerobic oxidation. The novelty of this study is that, for the first time, rifaximin was used for the synthesis of copper and copper oxide nanoparticles, and it successfully achieved decolorization and aerobic oxidation.

Introduction

Rifaximin is a semisynthetic antibiotic that is derived from rifamycin and has poor gastrointestinal absorption, but exhibits excellent bactericidal activity [1]. It is commonly used to treat various conditions such as diarrhoea, irritable bowel syndrome, ulcerative colitis, and hepatic encephalopathy [2]. Rifaximin demonstrates a broad range of activity against both gram-positive and gram-negative bacteria, as well as aerobic and anaerobic microorganisms. It is believed to alter the intestinal environment [3]. Thus, since RFX (molecular formula C43H51N3O11; chemical structure in Scheme 1) is a widely used antibiotic for treating infectious diseases, a rapid, sensitive, and selective assay is necessary for its administration [4]. In recent years, material scientists have prioritised environmentally sustainable methods for synthesising nanoscale materials [5]. The synthesis of rifaximin nanoparticles, particularly using various rifaximin solutions, is an emerging field in environmentally friendly chemistry that is believed to be simple, cost-effective, and non-toxic [6]. The use of eco-friendly products for curing and preventing human diseases has increased in recent years, attracting the attention of individuals in both developed and developing countries [7]. The widespread use of nanoparticles in various fields such as biotechnology, agriculture, chemistry, material science, energy, consumer goods, defense, optics, electronics, heavy industries, communications, medicine, microbiology, environmental remediation, and engineering is due to their unique properties [8]. Additionally, they are used in applications such as climate change, food, clothing, healthcare, treatment of lethal diseases such as cancer and respiratory
infections, Alzheimer's disease [9], cosmetics, water treatment, and diagnostics [10].Nanoparticles can be produced using various techniques, including physical, chemical, and green methods.Among these methods, green synthesis is a sustainable and cost-effective way to produce nanoparticles that does not involve the use of hazardous chemicals or high temperatures and pressures.Various types of nanoparticles, including Ag [11], Fe [12], Sb [13], Ca [14], and Au nanoparticles [15], have been synthesised using this environmentally friendly method [16].The advantages of inorganic materials, such as copper and nickel nanoparticles, extend to energy management, textiles, batteries, healthcare, catalysis, cosmetics, semiconductors, and chemical sensing [17].The utilisation of copper in nanostructures has gained significant interest in recent years owing to its diverse applications across various fields of science and technology [18].Examples of oxide nanoparticles such as CuO [19], MgO [20], AgO [21], CeO2 [22], ZnO [23], SnO 2 [24], and BaO nanoparticles [25] are highly valued for their numerous advantageous properties, including superconductivity at high temperatures, electron correlation effects, and spin dynamics, making them powerful tools in various applications.Isopropanol, also known as isopropyl alcohol (IPA), is a crucial chemical raw material and organic solvent due to its alcohol molecular structure [26,27].It is utilised in numerous medications, serves as a solvent and chemical intermediary, and is used to produce acetone and acetone derivatives, antiseptics, food additives, cosmetics, coatings, and other products.As a volatile, colourless solvent, it is commonly referred to as 2-propanol or isopropyl alcohol [28]. Wool fabrics are frequently dyed with water-soluble Rhodamine B dyes, which are also used in the textile industry.Rhodamine B is employed in various industries, including paper, plastics, printing, and leather, as a cosmetic, dye, photosensitizer, water tracer, paint manufacturing, food processing, fluorescent marker for microscopic structural investigations, and biological dye for biomedical research [29].Coomassie Brilliant Blue (CBB) is a dye that is commonly used to stain proteins in a variety of electrophoretic profiles and to quantify solutions.There are two types of CBB dyes, CBB R250 and CBB G-250, which differ in the presence of additional methyl groups.The number "250″ in the dye name indicates the concentration of dye, and "Coomassie" is a trademarked name owned by Imperial Chemical Industries (ICI) [30].CBB G250 is commonly used to determine protein concentration [31].Industrial expansion and environmental pollution have negative effects on human health, with water contamination being a serious problem, particularly in wastewater from the food, beverage, leather, cosmetic, dyeing, textile, and printing industries.Traditional wastewater treatment methods are generally insufficient for dealing with synthetic dyes, which may persist in the environment for extended periods and harm ecosystems [32].Coomassie Brilliant Blue G250 and Rhodamine B are examples of dyes with various chemical structures.Organic compounds can be oxidised to produce the desired compounds, and oxidants are also reduced simultaneously.Aerobic oxidation is the most effective method for preventing oxidation from contaminating the environment [33].Efficient and cost-effective catalytic systems for aerobic oxidation processes are crucial due to significant advancements in this field.However, 
conventional transition-metal-catalysed aerobic oxidation processes often require precise reaction conditions and expensive radical initiators [34]. Oxidation is necessary for activating and controlling radical reactions to produce oxygenated chemicals, such as ethanol, aldehydes, ketones, carboxylic acids, and epoxides from hydrocarbons [35]. Developing efficient and long-lasting catalytic processes for selective oxidation reactions is important in the pharmaceutical and chemical industries because oxidation is a common concern in the manufacturing of fine chemicals [36]. To our knowledge, the present study is the first to focus on exploiting copper and copper oxide nanoparticles to enhance the determination of rifaximin using dye decolorization and aerobic oxidation techniques. (Scheme 1. Synthesis of Cu and CuO nanoparticles (Rifaximin-Cu-CuO nanoparticles).)

Reagents and materials

All chemicals were of analytical grade and were procured from Sigma-Aldrich. A Thermo Scientific Nicolet iS5 FTIR spectrometer, with a range of 4000-400 cm-1, was utilised to examine all substances. A PerkinElmer GCMS model Clarus SQ8 (EI) was employed to record mass spectra.

Preparation of rifaximin copper nanoparticles

Copper(II) chloride dihydrate (0.01 mmol) and rifaximin (0.01 mmol) were each dissolved in ethanol, and the solutions were combined in a round-bottom flask. After the mixture was stirred on a magnetic stirrer for 10 min without heating, sodium hydroxide pellets were dissolved in ethanol (1 mM) and a few drops of the NaOH solution were added. The solution turned from dark green to light blue at room temperature. The Cu nanoparticle solution was filtered, washed with ethanol and dried, and the mixture was stirred again for 30 min, during which time it turned from light blue to green. This procedure resulted in the production of 183.7 mg (95 %) of Cu nanoparticles (Scheme 1).

Preparation of rifaximin CuO nanoparticles

Rifaximin Cu nanoparticles were heated at 70 °C. The blue colour of the Cu nanoparticles changed to the black of CuO nanoparticles. Subsequently, the mixture was cooled and collected. This procedure resulted in the production of 171.3 mg (81 %) of CuO nanoparticles (Scheme 1).

Aerobic oxidation of isopropanol conversion

The synthesis of Rifaximin Cu and CuO nanoparticles supports the development of novel methods that use substituted aromatic aldehydes as essential building blocks for the synthesis of molecules with a variety of functions (Scheme 2. Aerobic oxidation of isopropanol conversion). GC-EI-MS separation was optimised using the parameters described in the techniques section, resulting in the separation of all target components. Finally, isopropanol was converted completely, and more quickly, to 2-propanone than various other aliphatic alcohols were to their products. A recovery experiment was conducted to evaluate the stability and activity of the synthesised Rifaximin Cu and CuO nanoparticles (Scheme 2). Reaction transformations of the hydroxyl compounds used in industry are listed in Table 1.
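The quoted yields (95 % for the Cu and 81 % for the CuO nanoparticles) are conventional percent yields; a minimal sketch of the calculation follows. The theoretical masses are hypothetical placeholders chosen only to reproduce the quoted percentages, not a re-derivation of the stoichiometry.

```python
# Sketch of a percent-yield calculation for the preparations above.
# Theoretical masses are hypothetical placeholders, not derived values.

def percent_yield(actual_mg, theoretical_mg):
    """Percent yield = actual mass / theoretical mass * 100."""
    return 100.0 * actual_mg / theoretical_mg

print(f"Cu NPs:  {percent_yield(183.7, 193.4):.0f} %")   # ~95 %, as reported
print(f"CuO NPs: {percent_yield(171.3, 211.5):.0f} %")   # ~81 %, as reported
```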
Dye decolorization of rifaximin copper and copper oxide nanoparticles During the decolorization process, organic contaminants with vivid colours were removed from the sample mixture.When the solid product and impurities are thoroughly dissolved in a suitable solvent, this technique is typically applied in the solution phase.Analysis of decolorization activity: the dye was dissolved in water in a 5 mL volumetric flask containing 0.2 m mol/liter of each of the two different organic dyes, namely Coomassie Brilliant Blue G250 (CBBG250), and Rhodamine B. Next, 2 mg of the synthesised Cu and CuO nanoparticles was added.The sample tubes were sealed with Cu and CuO nanoparticles. Characterization of copper and copper oxide nanoparticles (Cu and CuO NPs) 2.6.1. Fourier transform infrared spectroscopy (FTIR) The FTIR investigation was conducted using a KBr pellet approach with a 4000-400 cm − 1 determined spectral range using a Tensor 27 (Nicolet iS5 FT-IR KBR Windows Spectrometer from Thermo Scientific).The nanoparticles were combined to produce pellets using 2 mg dry powder and 200 mg KBr [37]. X-ray Photoelectron Spectroscopy (XPS) The X-ray photoelectron spectrometry (XPS) technique, which was coupled with an Al K X-ray light source and featured ion generators with an energy range of 100-3 K eV, was used to investigate the chemical states and surface chemistry of the elements [33]. X-ray diffraction (XRD) An X-ray diffractometer was used to examine the XRD patterns of Cu and CuO nanoparticles.The diffractograms were recorded using K Cu X-ray energy with Ni-filtration (λ = 1.54 Å) at room temperature (Brucker Corp., Ettlingen, Germany, Model D8 Advance).XRD tests were performed in the 5-40 • 2 range with 0.02 • 2θ increments [39]. Scanning electron microscopy (SEM) Scanning electron microscopy (MIRA3 SEM model, Tescan, Czech Republic) was employed to assess the dimensions and shape of the Cu and CuO nanoparticles.A voltage of 15 kV was used for the imaging process [40]. Transmission electron microscopy (TEM) The morphology, shape, and size distribution of the nanoparticles were analyzed using field-emission transmission electron microscopy (FE-TEM, FEI Tecnai F20).In the preparation method, grids covered with nickel carbon were filled with solutions containing nanoparticles from the samples and the solvent was allowed to evaporate at room temperature.The elemental makeup of the bimetallic nanoparticles was analyzed using a 200 kV ultrathin electronic window, a Genesis liquid nitrogen-cooled energy-dispersive X-ray spectroscopy (EDX) detector, and spot size of 1.2 mm and 1.7 mm for Cs and Cc, respectively [40]. Gas chromatography and mass spectral analysis (single quadrupole GC-EI-MS) PerkinElmer GCMS model Clarus 690-SQ8MS (EI) with EA-1 [dimethyl polysiloxane] 30 m × 0.32 mm x 0.25 m columns was used for the analyses.GC-EI-MS was performed using the following parameters: injection volume, 0.5 μL, 250 • C, and 20:1 split ratio.After maintaining the temperature at 50 • C for a minute, the temperature was increased to 280 • C at 15 • C/min for 2 min.The transport gas was helium (99.9995 %) and the impact gas was nitrogen (99.999 %).Helium was used as the carrier gas, and both the transmissionline temperature and flow rate were adjusted to 1 mL/min and 250 • C, respectively.A solvent delay of 2 min was followed by a scan rate of 1500 Da/s that covered m/z 15-502.The quadrupole and source had a temperature of 220 • C [33]. 
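Two pieces of arithmetic implicit in the methods above can be made explicit. Decolorization percentages of the kind reported later are conventionally computed from absorbance at the dye's λmax, and the XRD geometry (Cu Kα, λ = 1.54 Å, as quoted above) fixes the interplanar spacings via Bragg's law. The sketch below uses these standard formulas; the absorbance values are placeholders, not measured data.

```python
# Sketch: standard formulas behind the decolorization and XRD analyses.
# Absorbance values are placeholders; lambda = 1.54 A is quoted in the text.
import math

def decolorization_pct(a0, at):
    """Dye decolorization D(%) = (A0 - At) / A0 * 100 at the dye's lambda_max."""
    return 100.0 * (a0 - at) / a0

def d_spacing_angstrom(two_theta_deg, wavelength=1.54):
    """Bragg's law: d = lambda / (2 sin(theta)), with theta from 2-theta."""
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

print(f"RhB + CuO NPs: {decolorization_pct(1.10, 0.30):.0f} % decolorized")  # ~73 %
print(f"d(2theta = 32.46 deg) = {d_spacing_angstrom(32.46):.3f} A")          # ~2.76 A
```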
FTIR analysis of rifaximin Cu and CuO nanoparticles FT-IR spectrum analysis was used to examine the surface characteristics of rifaximin-Cu and CuO nanoparticles.To determine the contribution of rifaximin and associated functional groups to the synthesis and stability of catalysis, the FT-IR spectra of the extract before and after the creation of Cu and CuO nanoparticles were recorded.The FTIR spectra of the Cu-O and Cu-O-H samples revealed the formation of Cu nanoparticles, as evidenced by the appearance of new peaks in the absorption bands in the 3337-476 cm -1 range.Additionally, the bending absorptions at 562 cm -1 were attributed to bonds in the Cu nanoparticles.The infrared spectrum of monoclinic CuO revealed the presence of a Cu-O bond in the range of 520 cm -1 , while the metal-oxygen (M − O) stretching vibrations of this compound were observed at 784 and 789 cm -1 .The surface of the CuO nanoparticles showed stretching and bending vibrations of water molecules and -OH group absorption peaks appearing at 1636.2 and 3337.8 cm -1 (Fig. 1).The presence of copper (Cu) was verified through infrared (IR) band measurement in the range of 550-600 cm -1 [41], while the CuO nanoparticles exhibited an absorption peak in the range of 200-800 cm -1 [42]. UV-visible studies of Cu and CuO nanoparticles UV-vis spectroscopy was used to characterise the synthesised rifaximin, Cu, and CuO nanoparticle.The UV-vis spectrum of rifaximin displays an absorption peak at 330 nm [43], while Cu shows an absorption peak at 253 nm [44].The absorption peak at 230 nm, attributed to the core-shell structure of the CuO nanoparticle, may be responsible for the reduced plasmon band in the bimetallic nanoparticle [45].The UV-vis spectra of the rifaximin, Cu, and CuO nanoparticle (Fig. 2). X-ray Photoelectron Spectroscopy (XPS) analysis of rifaximin CuO nanoparticles XPS was used to analyse the surface chemistry and oxidation states of the NPs.The CuO NPs were determined to contain Cu, O, and C in the XPS survey scan; however, no other impurities were detected (Fig. 3).The XP spectra of Cu 2p electrons displayed Cu 2p3/2 and Cu 2p1/2 with binding energies of 931.26 and 950.97 eV, respectively, and a spin-orbit splitting of 19.71 eV.These findings correspond to the outcomes of earlier research [33]. The crystalline phase and structure of the CuO nanoparticles were analyzed using powder X-ray diffraction PXRD tests showed that the copper material produced had a cubic lattice and a zero oxidation state.XPS analysis also supported this conclusion.Based on the XRD pattern, it can be inferred that the initial formation of copper hydroxide, which was subsequently dehydrated and thermally decomposed to produce Cu 2+ oxide, may also be due to the incorporation of CuCl 2 into the CuO shell.The Cu 2+ and CuO components contained copper without a doubt.The peak of Cu 2p3/2 at 932.4 eV is due to CuO, which is most likely caused by CuO nanoparticles that are exposed to air and begin to oxidise. X-ray diffraction (XRD) analysis of rifaximin Cu and CuO nanoparticles The XRD patterns of the Cu and CuO nanoparticle displayed distinct peaks at specific angles (Fig. 
The Cu nanoparticles exhibited peaks at 32.45°, 56.28°, 74.96°, and 80.03°, corresponding to the (110), (200), (111), and (112) planes, respectively. The CuO nanoparticles showed peaks at 32.46° and 56.27°, corresponding to the (111) and (200) planes, respectively. Our results agree closely with the values reported in the literature for metallic copper and its face-centred cubic (FCC) structure, matching the data from the standard JCPDS card (No. 04-0836) [46].

Scanning electron microscopy (SEM) analysis of rifaximin Cu and CuO nanoparticles
The morphology and size of the Cu and CuO nanoparticles were analyzed using SEM. The rifaximin nanoparticles were investigated by SEM at 5000× and 30,000× magnification, and the corresponding micrographs are displayed in Fig. 5. The size of a nanoparticle plays a significant role in determining its characteristics and bioactivity. In this study, the rifaximin Cu nanoparticles were found to have sizes of less than 10 μm and were predominantly spherical in shape [47]. The results of EDX analysis (Fig. 6a and b) revealed the presence of Cu, C, and O in the synthesised nanoparticles, with carbon (4.00 %) and copper (96.00 %) in the Cu nanoparticles, and oxygen (16.08 %) and copper (83.92 %) in the CuO nanoparticles.

Transmission electron microscopy (TEM) analysis of rifaximin CuO nanoparticles
The morphology and surface structure of the CuO nanoparticles were examined using TEM and SAED. The CuO nanoparticles exhibited a uniform distribution (Fig. 7). The TEM image (Fig. 7a) reveals a uniform distribution of CuO nanoparticles with a size of 10 nm (Fig. 7b). The crystal structure of CuO (002) was confirmed by the lattice fringes with a spacing of 0.250 nm visible in the TEM image (Fig. 7c). The SAED data (Fig. 7d) display a dotted ring pattern, indicating a polycrystalline structure with a face-centred cubic (FCC) arrangement. The elemental distributions of Cu and O are very similar at 10 nm (Fig. 7e and f), which further supports the form of the CuO nanoparticles [48].

Aerobic oxidation analysis
GC separation was optimised using the parameters described in the methods section, resulting in the separation of all target components. As part of our continued work on developing new procedures, isopropanol was transformed into 2-propanone [49] using rifaximin CuO nanoparticles. Rifaximin-supported copper nanoparticles were successfully synthesised and transformed isopropanol into 2-propanone while maintaining their catalytic activity (Scheme 3). The various aerobic oxidation results are reported in the Supporting Information (Table 1).

Cotton in a tube was used to trap the volatile crude product. After a week, the volatile substance (isopropanol) had evaporated; bottle A contained the rifaximin Cu nanoparticles and bottle B the rifaximin CuO nanoparticles, both settled in cotton during the evaporation (aerobic oxidation) step (Figs. 8 and 9). The cotton was then soaked in ethyl acetate, and the dissolved solution was injected into the GC-MS system. Retention times of 27.35 and 30.32 min for 2-propanone, the final product obtained with the Cu and CuO nanoparticles, respectively, were confirmed by GC-EI-MS (Figs. 10 and 11).
The appearance of a single peak in the GC chromatogram indicates that the product was completely isolated from the other components of the mixture; the single peak likewise shows that the product was fully separated from the rifaximin Cu and CuO nanoparticle combination. Mass spectral characterisation (EI-MS) determined the molecular weight, showing a molecular ion at m/z 58.72 (M⁺, 8 %), consistent with the molecular mass of 2-propanone. Fig. 12 shows the EI-MS spectrum of 2-propanone.

Dye decolorization of rifaximin Cu and CuO nanoparticles
Coomassie Brilliant Blue G250 and Rhodamine B were both used in the dye decolorization process. The process was documented before and after decolorization of Coomassie Brilliant Blue G250 and Rhodamine B by the Cu and CuO nanoparticles (Figs. 13 and 14). In particular, the Cu nanoparticles (100 %) decolourised the Brilliant Blue dye far more strongly than the rifaximin base solution (5 %) and the rifaximin CuO nanoparticles (81 %) (Scheme 4). The synthesised rifaximin Cu and CuO nanoparticles were also applied to the decolorization of rhodamine B dye. After 32 h, the rifaximin CuO nanoparticles had absorbed 81 % of the dye, whereas the Cu nanoparticles did not absorb rhodamine B, being inactive towards this dye compared with the CuO nanoparticles (Schemes 5 and 6). The decolorization percentages above are confirmed by the time-dependent dye decolorization graphs (Figs. 15 and 16). Overall, the rifaximin Cu nanoparticles achieved the highest dye decolorization, reaching 100 % for Coomassie Brilliant Blue G250.

Conclusion
The rifaximin Cu and CuO nanoparticles were investigated using FT-IR, UV, SEM, TEM, XPS, XRD, and GC-EI-MS. The high purity of the rifaximin Cu nanoparticles was confirmed by XRD measurements, which revealed no extra reflections in the X-ray diffraction pattern. TEM images of the rifaximin CuO nanoparticles showed mostly spherical particles about 20 nm in size, while SEM images indicated sizes of about 10 nm. EDX analysis showed copper contents of 96.00 and 83.92 wt % in the synthesised Cu and CuO nanoparticles, respectively, confirming their high Cu content. The synthesised rifaximin Cu and CuO nanoparticles were effective in decolourising Coomassie Brilliant Blue G250 and Rhodamine B in aqueous solutions. The rifaximin Cu nanoparticles achieved complete decolorization of G250 (100 %), whereas Rhodamine B was decolourised by 73 % with the rifaximin CuO nanoparticles. Comparatively, the rifaximin Cu nanoparticles showed better dye decolorization owing to higher dye absorption with Coomassie Brilliant Blue G250. Accordingly, rifaximin copper oxide nanoparticles have the potential to serve as a reliable, ecofriendly, and cost-effective solution for the treatment of dye-contaminated water. The rifaximin CuO nanoparticles also proved to be an excellent environmentally benign catalyst for aerobic oxidation: they exhibit good catalytic performance, are nontoxic and safe for the environment, and show stable recoverability. To date, there has been no recorded application of rifaximin copper and copper oxide nanoparticles for decolorization and aerobic oxidation; here, we successfully achieved both processes using rifaximin, copper, and copper oxide nanoparticles.
Fig. 13. Dye decolorization: rifaximin (A), rifaximin Cu NPs (B) and rifaximin CuO nanoparticles (C) in Coomassie Brilliant Blue G250 dye.
Fig. 15. Time-dependent dye decolorization of Coomassie Brilliant Blue G250.
Web-Based Tool for the Development of Intensity Duration Frequency Curves under Changing Climate at Gauged and Ungauged Locations

Rainfall Intensity-Duration-Frequency (IDF) curves are among the most essential datasets used in water resources management across the globe. Traditionally, they are derived from observations of historical rainfall, under the assumption of stationarity. Changing climatic conditions make the use of historical data for the development of IDFs for the future unreliable and, in some cases, may lead to underestimated infrastructure designs. The IDF_CC tool is designed to assist water professionals and engineers in producing IDF estimates under changing climatic conditions. The latest version of the tool (Version 4) provides updated IDF curve estimates for gauged locations (rainfall monitoring stations) and ungauged sites using a new gridded dataset of IDF curves for the land mass of Canada. The tool has been developed using web-based technologies and takes the form of a decision support system (DSS). The main modifications and improvements between version 1 and the latest version of the IDF_CC tool include: (i) introduction of the Generalized Extreme Value (GEV) distribution; (ii) an updated equidistant quantile matching (EQM) algorithm; (iii) a gridded IDF curves dataset for ungauged locations; and (iv) updated climate models.

Introduction
Rainfall Intensity-Duration-Frequency (IDF) curves describe the relationship between rainfall intensity, rainfall duration, and the probability of exceedance given by the return period (frequency), and are used for many water management applications, including the design of major and minor stormwater management systems, sanitary sewers, detention ponds, culverts, bridges, dams, pumping stations, and roads, among others [1]. In Canada, we are witnessing a growing demand for robust methods and tools to assist rapid evaluation of future extreme rainfall events and their impact on IDF curves. According to [1], the increase in the demand for rainfall IDF information can be summarized as follows: (i) as the spatial heterogeneity of extreme rainfall patterns becomes better understood and documented, a stronger case is made for the value of "locally relevant" IDF information; (ii) as urban areas expand and evolve, watersheds are generally becoming less permeable to rainfall and, consequently, experience increased runoff, and many existing water infrastructures are increasingly failing to perform at the service levels for which they were designed; understanding the full magnitude of the deficit these systems are subject to requires information on the maximum inputs (extreme rainfall events) with which drainage works must contend; and (iii) climate change will likely result in an increase in the intensity and frequency of extreme precipitation events in most regions in the future [2].
One of the impacts of climate change is the intensification of the global hydrologic cycle, causing increased intensity of wet and dry extremes and the resulting floods and droughts [3]. Many studies have suggested that climate change will have considerable impacts on extreme rainfall and the associated management of water infrastructure [4-10]. Efforts have been made to better understand and improve the reliability of projected future precipitation events [11] with the adjusted forcing scenarios defined for CMIP5 (Coupled Model Intercomparison Project Phase 5). In particular, the PDRMIP (Precipitation Driver and Response Model Intercomparison Project) [12] focuses on evaluating the climate change drivers of precipitation, with changes over land versus ocean and over key regions of the globe. It is expected that the PDRMIP will contribute to the adjustment of forcing scenarios for CMIP6 (Coupled Model Intercomparison Project Phase 6), currently in full swing [12,13]. There has been a notable increase in damages associated with extreme rainfall events in urban municipalities; some examples of the impacts of extreme rainfall events on large urban centers in Canada, and the resulting economic losses, are presented in [14,15]. A lack of readily and easily accessible data and information to assess adaptation options, and limited availability of technical resources to implement adaptation options, have been identified as barriers to climate change adaptation [16]. In addition, much of the work on the impacts of climate change on design standards has been conducted in academia, with limited availability of research results to practitioners. Political factors may also inhibit the application of design standards that reflect the increasing intensity and frequency of extreme events. Further, there exists a level of uncertainty associated with future climate projections, in particular the uncertainty surrounding the future greenhouse gas concentration scenarios, also known as Representative Concentration Pathways (RCPs) [11-13,17], creating difficulty in the application of results. The issue is further aggravated by the various uncertainties associated with the use of several distinct GCMs (Global Climate Models), given their limited capacity for projecting longer-term precipitation events with high accuracy, their large spatial scales (usually larger than the size of most watersheds), and the distinct downscaling techniques available.

Because of changing conditions, IDF values will optimally need to be updated more frequently than in the past, and climate change scenarios are required to inform IDF calculations. The main assumption in the process of developing IDF curves is that historical series are stationary and, therefore, can be used to represent future extreme conditions. This assumption is not valid under rapidly changing conditions, and therefore IDF curves that rely only on historical observations will misrepresent future conditions [18,19]. In the presence of climate change, the statistical characteristics of historically observed rainfall events will differ under future conditions. Limited information is available on how to bring climate change into IDF calculations [20-25], and even less on how to implement updated IDF curves in practice [26]. Authors such as [27,28] have presented decision support systems to assist in the calculation of IDF curves, using GIS tools and Microsoft Excel, respectively.
The rainfall IDF_CC tool is designed to address this gap. The authors and supporting agencies strongly believe that a publicly available, computerized tool for updating IDF relationships under changing climate aids in the selection of effective climate change adaptation options at the local level, advancing the decision-making capabilities of municipalities and watershed management authorities.

The initial version of the IDF_CC tool, version 1, was publicly released in 2015 [4,26]. A second version was made public in August 2017, and since then, new features have been introduced into the current version of the tool, which are described in this paper. The significant improvements of the latest version of the tool include: (i) use of the General Extreme Value (GEV) distribution; (ii) a new quantile matching (QM) algorithm for updating IDFs using the GEV distribution; (iii) addition of a new module for developing IDF curves for ungauged locations across Canada and a methodology for updating them under climate change; and (iv) new and updated climate models used for the IDF projections.

The manuscript is organized as follows: Section 2 describes the methodology implemented for the calculation of IDF curves under climate change; the description of the IDF_CC implementation follows in Section 3; Section 4 provides a detailed guide for IDF_CC tool use; Section 5 presents the discussion of the results and uncertainty; and Section 6 provides some conclusions based on the up-to-date public use of the tool.

Methodology
This section presents the methodology developed for and used with the IDF_CC tool version 4, and the changes made to the tool since the publication of version 1. We briefly describe version 1 of the tool, and then the changes introduced in versions 2, 3 and 4. These changes include: (1) use of the GEV distribution; (2) additional climate data with new climate projections; and (3) a new gridded IDF curves dataset. With the use of the GEV distribution and the new dataset for ungauged IDF curves (the gridded dataset), the algorithms for updating IDF curves under climate change had to be modified and adapted from the original version 1. These methods are described in Sections 2.3 and 2.4.

IDF_CC Tool
The IDF_CC tool version 1 was made public in March 2015 and has since been used by a large number of practitioners, consultants, and municipal engineers [26]. Version 1 used an original equidistant quantile-matching (EQM) downscaling method for updating IDF curves [4,20]. The goal was to mimic changes between the projected time period and the baseline period from climate models. The methodology used the Gumbel distribution fitted to the annual maximum precipitation (AMP) series of the observed data and to the series extracted from the GCMs, using the method of moments to estimate the parameters. The quantile-mapping functions are applied directly to the annual maximum precipitation (AMP) to establish statistical relationships between the AMPs of GCM-generated precipitation data and the sub-daily observed (historical) data, rather than using complete daily precipitation records. Details of the implementation and use of the tool are given in [4] and [26]. Version 1 of the tool has been used to estimate changes in IDF curves for Canada and produce interpolated gridded maps [29]. Another study [30] details an analysis comparing the IDF_CC future IDF projections to the theoretical Clausius-Clapeyron scaling, which estimates a constant rate of increase in short-duration precipitation based on temperature increase.
Since version 1 was released, the IDF tool has been improved and several new features have been added, which are described in more detail in the following sections:
• Implementation of the General Extreme Value (GEV) distribution with the L-moments method (described in Section 2.2).
• Introduction of the module for development of IDF curves for ungauged locations (described in Section 2.3).
• Update of the station database with the stations from the Environment and Climate Change Canada (ECCC) IDF engineering dataset [31].
• Update of the climate model database with the second version of bias-corrected climate models from the Pacific Climate Impacts Consortium [32].

Statistical Distributions
This section describes the statistical distributions and the estimation of their parameters as used in the IDF_CC tool.

Gumbel Distribution
The Gumbel distribution, usually denoted in its general form by G(x; µ, σ), with µ the location and σ the scale parameter, is used as the standard distribution by ECCC for all precipitation frequency analyses in Canada. The annual extreme quantiles can be expressed as follows:

Q_T = µ + K_T σ (1)

where Q_T is the exceedance value, µ and σ are the mean and standard deviation (parameters of the distribution) of the annual extreme series, and K_T is the frequency factor, which can be calculated as:

K_T = -(√6/π) {0.5772 + ln[ln(T/(T - 1))]} (2)

where T is the return period in years.

For the parameter estimation of the Gumbel distribution, ECCC uses and recommends the method of moments [33], as it is simple and yields consistent estimators. In the case of the Gumbel distribution, the parameters µ and σ are estimated by the method of moments from the data series.

GEV Distribution
The General Extreme Value (GEV) distribution, usually denoted in its general form by GEV(x; µ, α, k), with µ the location, α the scale and k the shape parameter of the distribution, is a family of continuous probability distributions that combines the three asymptotic extreme value distributions into one: the Gumbel (EV1), Fréchet (EV2) and Weibull (EV3) types. GEV uses three parameters: location, scale and shape. The shape parameter is derived from the skewness, as it represents where most of the data lie, which creates the tail(s) of the distribution. A value of the shape parameter k = 0 indicates an EV1 distribution, a value of k > 0 indicates EV2 (Fréchet), and k < 0 indicates the EV3 distribution (Weibull). The Fréchet type has a longer upper tail than the Gumbel distribution, and the Weibull type a shorter one [34-37]. For this reason, the GEV distribution can potentially provide a better fit to the precipitation data than the 2-parameter Gumbel distribution [35,38-41]. Its cumulative distribution function is

F(x) = exp{-[1 - k(x - µ)/α]^(1/k)} for k ≠ 0 (3)
F(x) = exp{-exp[-(x - µ)/α]} for k = 0 (4)

with µ the location, α the scale and k the shape parameter of the distribution, with µ, k ∈ ℝ and α ≥ 0.
The inverse distribution function, or quantile function, is given by (5) for k ≠ 0 and (6) for k = 0, with the other parameters as described above:

x(F) = µ + α{1 - (-ln F)^k}/k for k ≠ 0 (5)
x(F) = µ - α ln(-ln F) for k = 0 (6)

Parameter estimation with the L-moments method
The L-moments [28,42] and maximum-likelihood methods are commonly used to estimate the parameters of the GEV distribution fitted to annual maxima series. L-moments are a modification of the probability-weighted moments (PWMs), as they use the PWMs to calculate parameters that are easier to interpret. The PWMs can be used in the calculation of parameters for statistical distributions [34,36]. L-moments are a robust alternative to the moments of the distributions and are linear combinations of the order statistics of the annual maximum rainfall amounts [34,35,43]. The PWMs are estimated by (7) to (9):

b₀ = (1/n) Σ_{j=1..n} x_j (7)
b₁ = (1/n) Σ_{j=2..n} x_j (j - 1)/(n - 1) (8)
b₂ = (1/n) Σ_{j=3..n} x_j (j - 1)(j - 2)/[(n - 1)(n - 2)] (9)

where x_j is the ordered (ascending) sample of the annual maximum series (AMP) and the b_i are the first PWMs. The sample L-moments can then be obtained as (Equations (10)-(12)):

ℓ₁ = b₀ (10)
ℓ₂ = 2b₁ - b₀ (11)
ℓ₃ = 6b₂ - 6b₁ + b₀ (12)

The GEV parameters, location (µ), scale (α) and shape (k), are defined [34] as (Equations (13)-(15)):

k = 7.8590 c + 2.9554 c² (13)
α = ℓ₂ k / [(1 - 2^(-k)) Γ(1 + k)] (14)
µ = ℓ₁ - α [1 - Γ(1 + k)]/k (15)

where:

c = 2/(3 + τ₃) - ln 2/ln 3, with τ₃ = ℓ₃/ℓ₂ (16)

where Γ(·) is the gamma function, ℓ₁, ℓ₂ and ℓ₃ are the first three L-moments, µ is the location, α is the scale and k is the shape parameter of the GEV distribution.

IDF Curves for Ungauged Locations
One important addition made in the latest version of the IDF_CC tool, and described in this manuscript, is gridded IDF estimates across Canada. The intention in including this dataset is to allow development of IDF curves at ungauged locations across Canada by users of the tool. The methodology involves making preliminary estimates of IDF curves from atmospheric variables (AVs) that shape precipitation extremes in different parts of Canada, and then applying a bias-correction function to correct for spatial errors. A summary of the methodology is provided in the following five steps. For a detailed description of the methodology and a specific analysis of the created ungauged dataset, the reader is directed to [44].

Step 1: Preparation of predictors
Daily time-series of AVs are extracted for all grids located within Canada for the period 1979-2013 from the North American Regional Reanalysis (NARR) [45] and ERA-Interim [46] databases. The extracted time-series are used to calculate annual mean and maximum AV values, yielding an array of 31 predictors at all reanalysis grid-points. These are used later, in step 4, for the prediction of preliminary IDF estimates. The calculated predictor variables are bi-linearly interpolated to obtain predictor values at all precipitation gauging station locations. These are used in steps 2 and 3 to identify relevant AVs and to calibrate machine learning algorithms at each precipitation gauging station location.

Step 2: Identification of relevant AVs at precipitation gauging station locations
The most relevant AVs out of the 31 potential AVs are identified at precipitation gauging stations with at least 10 years of observational data. Individual sets of relevant AVs are obtained for precipitation extremes of different durations. Since annual mean precipitation (P-mean) has been identified as an important predictor when modelling precipitation extremes [47,48], it is considered a 'reference' predictor in this study. This means that P-mean is considered one of the relevant predictors at all precipitation gauging stations.
The relevance of other AVs toward shaping AMP magnitudes is evaluated at each precipitation gauging station by performing a chi-squared test and correlation analysis. Chi-squared tests are performed to compare two nested linear regression models of the observed AMP magnitudes: (1) a model with only the 'reference' predictor, and (2) a model with the 'reference' and a 'test' predictor. It is ascertained whether the inclusion of the 'test' predictor variable leads to a statistically significant improvement (at p = 0.05) in the definition of model 1. AVs resulting in a statistically significant improvement in the regression model definition are also identified as relevant predictor variables. In addition, correlations between AMP magnitudes and the different AVs are calculated, and highly correlated AVs are also considered for modelling AMP magnitudes.

Step 3: Calibration of machine learning (ML) models at precipitation gauging stations
Next, ML models describing AMP magnitudes as a function of the identified relevant AVs are calibrated at each precipitation gauging station, for each of the sub-daily durations. To minimize the risk of obtaining unstable regression relationships at stations with short data records, observational and AV data from neighboring stations falling within a pooling extent are pooled when forming the relationship between AMP and the relevant AVs. In this study, two pooling extents, encompassing the 10 and 25 closest stations surrounding the gauging station of interest, are considered for analysis. One machine learning algorithm, SVM (support vector machines) [49], is used to define the relationship between the predictand and predictor variables, implemented with the kernlab package [50] in the R programming language. The Sequential Minimal Optimization procedure [51] is chosen as the optimization procedure for estimating the SVM regression parameters. The results produced by this algorithm in R are then incorporated into the IDF_CC tool, as described in Section 3.1.

Step 4: Prediction of preliminary IDF estimates at reanalysis grids
Prediction of preliminary IDF estimates for a particular reanalysis grid is made by using the calibrated ML model from the nearest precipitation gauging station and the time-series of predictors associated with the reanalysis grid, as calculated in step 1. This process is repeated for all reanalysis grids and precipitation durations to obtain gridded AMP estimates across Canada. The obtained AMP estimates are fitted to a Generalized Extreme Value (GEV) distribution, and precipitation intensities corresponding to the 2-, 5-, 10-, 25-, 50-, and 100-year return periods are estimated.

Step 5: Correction of spatial errors
In the final step, the estimated preliminary IDF magnitudes are bilinearly interpolated at precipitation gauging station locations and used in conjunction with observation-based IDF magnitudes to obtain correction factors at each precipitation gauging station location. Different sets of correction factors are calculated for IDFs of different durations and return periods. The correction factor C_{d,f,s}, obtained at a gauging station s for a precipitation event of duration d and frequency f, is calculated as:

C_{d,f,s} = IDF_{obs,d,f,s} / IDF_{mod,d,f,s}

where the subscripts obs and mod denote observed and modelled data, respectively. The correction factors calculated at each precipitation gauging station are bilinearly interpolated to obtain gridded correction factors for all reanalysis grids located within Canada. The correction factors obtained for the reanalysis grids are multiplied with the preliminary IDF estimates to obtain the final gridded IDF estimates.

The spatial distribution of the correction factors indicates a higher accuracy of the preliminary estimates in both the eastern and western coastal regions of Canada, south-western Ontario, and northern Quebec. Relatively lower accuracy in the preliminary estimates is obtained for northern Ontario, the prairies and the majority of the northern regions of Canada.
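To make the estimators of Section 2.2 concrete, the following minimal Python sketch implements the Gumbel frequency-factor quantile of Equations (1)-(2) and the L-moments GEV fit and quantile of Equations (5)-(15). It is an illustration only, not the tool's implementation: the sample annual maxima are hypothetical, and the k → 0 (Gumbel) limiting case of the GEV quantile is omitted for brevity.

```python
import math

def gumbel_quantile(mean, std, T):
    """Gumbel (EV1) quantile by the method-of-moments frequency factor,
    Q_T = mu + K_T * sigma (Equations (1)-(2))."""
    KT = -(math.sqrt(6.0) / math.pi) * (0.5772 + math.log(math.log(T / (T - 1.0))))
    return mean + KT * std

def gev_lmoments_fit(sample):
    """GEV parameters (mu, alpha, k) from sample L-moments (Hosking's method),
    following Equations (7)-(15); 'sample' is an annual-maximum series."""
    x = sorted(sample)                         # ascending order statistics
    n = len(x)
    b0 = sum(x) / n                            # PWM b0, Eq. (7)
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))                      # Eq. (8)
    b2 = sum(j * (j - 1) * x[j] for j in range(n)) / (n * (n - 1) * (n - 2))  # Eq. (9)
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0                        # Eqs. (10)-(12)
    t3 = l3 / l2                               # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2           # Eq. (13)
    g = math.gamma(1.0 + k)
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * g)  # Eq. (14)
    mu = l1 - alpha * (1.0 - g) / k             # Eq. (15)
    return mu, alpha, k

def gev_quantile(mu, alpha, k, T):
    """Return-period quantile from the inverse CDF of Eq. (5), F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    return mu + alpha * (1.0 - (-math.log(F)) ** k) / k

# Hypothetical annual maxima (mm) and 100-year estimates from both fits
amp = [21.4, 35.2, 18.9, 27.5, 40.1, 24.8, 30.6, 22.3, 33.0, 26.1]
mean = sum(amp) / len(amp)
std = (sum((v - mean) ** 2 for v in amp) / (len(amp) - 1)) ** 0.5
print(gumbel_quantile(mean, std, 100))
print(gev_quantile(*gev_lmoments_fit(amp), 100))
```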
IDF Curves under Changing Climate
The main assumption in the process of developing IDF curves is that the historical series are stationary and can therefore be used to represent future extreme conditions. This assumption is not valid under rapidly changing conditions, and therefore IDF curves that rely only on historical observations will misrepresent future conditions [52,53]. Global Climate Models (GCMs) are one of the best ways to explicitly address changing climate conditions for future periods (i.e., non-stationary conditions). GCMs simulate atmospheric patterns on large spatial grid scales (usually greater than 100 km) and are therefore unable to represent regional-scale dynamics accurately. In contrast, regional climate models (RCMs) are developed to incorporate local-scale effects and use smaller grid scales, usually 10 to 50 km or even less. The major shortcoming of RCMs is the computational intensity required to generate realizations for various atmospheric forcings.

Both GCMs and RCMs have larger spatial scales than the size of most watersheds, which is the relevant scale for IDF curves. Downscaling is one of the techniques to link GCM/RCM grid scales and local study areas for the development of IDF curves under changing climate conditions. Downscaling approaches can be broadly classified as either dynamic or statistical. The dynamic downscaling procedure is based on limited-area models or uses higher-resolution GCM/RCM models to simulate local conditions, whereas statistical downscaling procedures are based on transfer functions that relate GCM outputs to the local study area; that is, a mathematical relationship is developed between GCM outputs and historically observed data for the period of observations. Statistical downscaling procedures are used more widely than dynamic models because of their lower computational requirements and the availability of GCM outputs for a wider range of emission scenarios.

The IDF_CC tool adopts a modified version of the equidistant quantile-matching (EQM) method [20] for temporal downscaling of precipitation data, which can capture the distribution of changes between the projected time period and the baseline. Future projections are incorporated by using the concept of quantile delta mapping [54-56], also known as scaling. For spatial downscaling, the tool utilizes data from GCMs produced for the Coupled Model Intercomparison Project Phase 5 (CMIP5) [50] and statistically downscaled daily Canada-wide climate scenarios, at a gridded resolution of 300 arc-seconds (0.0833 degrees, or roughly 10 km), for the simulated period 1950-2100 [32]. The spatially and temporally downscaled information is used for updating the IDF curves.

Equidistant Quantile Matching Method with GEV
The IDF_CC tool uses an equidistant quantile matching (EQM) method to update IDF curves under changing climate conditions by temporally downscaling precipitation data to explicitly capture the changes in the GCM data between the baseline period and a future period. The flow chart of the EQM methodology is shown in Figure 1. In the notation below, AMP stands for the annual maximum precipitation, j is the subscript for the sub-daily durations, o denotes the observed historical series, h the historical simulation (baseline) period, m the model (downscaled GCM), f the future projected series, p the non-exceedance probability, F the CDF of the fitted GEV probability distribution and F⁻¹ the inverse CDF. The steps involved in the algorithm are as follows:
1. Extract the sub-daily maxima AMP_{j}^{o,h} from the observed data at a given location (i.e., maxima of the 5-, 10- and 15-minute and 1-, 2-, 6-, 12- and 24-hour precipitation data).
2. Extract the daily maxima for the historical baseline period from the selected GCMs, AMP^{m,h}.
3. Fit the GEV probability distribution to the maxima series extracted in step 1 for each sub-daily duration, AMP_{j}^{o,h}, and to the GCM series from step 2, AMP^{m,h}.
4. Based on the sampling technique proposed by [22], generate random numbers for the non-exceedance probability in the [0, 1] range. The quantiles extracted from the GEV fitted to each pair AMP_{j}^{o,h} and AMP^{m,h} are equated to establish a statistical relationship, Equation (17), in which ÂMP_{j}^{o,h} corresponds to the AMP quantiles at the station scale and a_j, b_j and c_j are the adjusted coefficients of the equation for each sub-daily duration j. A Differential Evolution (DE) optimization algorithm is used to fit the coefficients a_j, b_j and c_j.
5. Extract the daily maxima from the RCP scenarios used in the IDF_CC tool (i.e., RCP 2.6, RCP 4.5, RCP 8.5) for the selected GCM model, AMP^{m,f}.
6. Fit the GEV probability distribution to the daily maxima from the GCM model for each of the future scenarios, F_{m,f}.
7. For each projected future precipitation value AMP^{m,f}, calculate the non-exceedance probability p^{m,f} from the fitted GEV F_{m,f}. Find the corresponding quantile ÂMP^{m,h} at the GCM historical baseline by entering the value of p^{m,f} into the inverse CDF, i.e., ÂMP^{m,h} = F_{m,h}^{-1}(p^{m,f}). This is a scaling step introduced to incorporate the future projections into the updated IDF, and it uses the concept of quantile delta mapping [54,56]. The relative change Δ^{m,f} is calculated using Equation (20):

Δ^{m,f} = AMP^{m,f} / ÂMP^{m,h} (20)

8. To generate the projected future maximum sub-daily series at the station scale, AMP_{j}^{o,f}, use (17), replacing AMP^{m,h} with ÂMP^{m,h} and multiplying by the relative change Δ^{m,f} from Equation (20).
9. Generate IDF curves for the future sub-daily data and compare them with the historically observed IDF curves to observe the change in intensities.

Spatial Interpolation of GCM Data
GCM spatial grid scales are too coarse for direct application in updating IDF curves, usually exceeding 1.5° × 1.5°. Therefore, the GCM data have to be spatially interpolated to the station coordinates for use in downscaling. The inverse square distance weighting method is applied in the IDF_CC tool. The nearest four GCM grid points to the station are used, weighting the precipitation values by the distance between the station and each GCM grid point. In this way, the GCM grid points that are closer to the station are weighted more heavily than the grid points farther away. The mathematical expression for the inverse square distance weighting method is:

P = [Σ_{i=1..k} P_i / d_i²] / [Σ_{i=1..k} 1 / d_i²]

where P_i is the precipitation at the ith GCM grid point, d_i is the distance between the ith GCM grid point and the station, and k is the number of nearest grid points (equal to 4 in the IDF_CC tool).

Updating IDFs for Ungauged Locations
The updating procedure for ungauged locations adopts a modified version of the equidistant quantile matching (EQM) method discussed in [20]. Changes in future conditions due to climate change are captured from the GCMs by evaluating the magnitude and sign of change, comparing the model's baseline and future periods for each RCP, and then applying them to the IDF estimates from the gridded data. The flow chart of the modified EQM methodology is shown in Figure 2.
The following discussion presents the modified EQM method for updating the IDF curves for gridded data that is employed by the current version of the IDF_CC tool. The following notation is used in the description of the EQM steps: AMP stands for the annual maximum precipitation; j is the subscript for the 5 min, 10 min, 15 min, 1 h, 2 h, 6 h, 12 h and 24 h sub-daily durations; T is the return period (in years); o denotes the observed historical series; h the historical simulation period (baseline for model data); m the model (downscaled GCMs); f the future projected series; p the non-exceedance probability for a given T; F the CDF of the fitted GEV probability distribution and F⁻¹ the inverse CDF. The steps involved in the algorithm are as follows:
1. Extract the IDF curves representing the historical IDF from the gridded dataset for all durations (5 min, 10 min, 15 min, 1 h, 2 h, 6 h, 12 h, 24 h) and all return periods (2, 5, 10, 25, 50 and 100 years) at the selected location.
2. Extract the daily maxima for the historical baseline period from the selected GCMs, AMP^{m,h}.
3. Fit the GEV probability distribution to the maxima series extracted for the GCM series in step 2, F_{m,h}.
4. Extract the daily maxima from the RCP scenarios (i.e., RCP 2.6, RCP 4.5, RCP 8.5) for the selected GCM model, AMP^{m,f}.
5. Fit the GEV probability distribution to the daily maxima from the GCM model for each of the future scenarios, F_{m,f}.
6. For each projected future precipitation series, calculate the quantiles AMP_{T}^{m,f} using the non-exceedance probability p_T for each T (2, 5, 10, 25, 50 and 100 years) from the inverse CDF of the fitted GEV, F_{m,f}^{-1}. Similarly, calculate the quantiles ÂMP_{T}^{m,h} at the GCM historical baseline by entering the value of the non-exceedance probability for each T into the inverse CDF F_{m,h}^{-1}. This is a scaling step introduced to incorporate the future projections into the updated IDF, and it mimics the concept of quantile delta mapping [54,56]. The relative change Δ_{T}^{m,f} is calculated using (25) for each T of 2, 5, 10, 25, 50 and 100 years:

Δ_{T}^{m,f} = AMP_{T}^{m,f} / ÂMP_{T}^{m,h} (25)

BCCAQ v2 [56] is a hybrid method that combines results from BCCA [57] and quantile mapping (QMAP) [58]. This method uses spatial aggregation and quantile mapping steps similar to those of Bias-Correction Spatial Disaggregation (BCSD) [59-61], but obtains spatial information from a linear combination of historical analogues for daily large-scale fields, avoiding the need for monthly aggregates [32]. QMAP applies quantile mapping to daily climate model outputs that have been interpolated to the high-resolution grid using the climate imprint method of [62].
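The scaling and interpolation steps above can be sketched as follows. This is a simplified illustration rather than the tool's implementation: gev_quantile is the helper from the sketch at the end of Section 2.3, and all parameter values and distances are hypothetical.

```python
def delta_scaled_quantile(hist_quantile, baseline_params, future_params, T):
    """Quantile-delta-mapping scaling step (Eqs. (20)/(25) above): the ratio
    of the GCM future to GCM baseline GEV quantiles at the same
    non-exceedance probability scales the historical (station or gridded)
    quantile to future conditions. Requires gev_quantile(...) from the
    earlier L-moments sketch."""
    q_base = gev_quantile(*baseline_params, T)    # F_mh^{-1}(p_T)
    q_future = gev_quantile(*future_params, T)    # F_mf^{-1}(p_T)
    return hist_quantile * (q_future / q_base)

def idw_square(values, distances):
    """Inverse-square-distance weighting of the nearest GCM grid points
    (four points in the IDF_CC tool): P = sum(P_i/d_i^2) / sum(1/d_i^2)."""
    weights = [1.0 / d ** 2 for d in distances]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical example: a 43.1 mm historical 100-year quantile scaled by a
# GCM whose baseline/future GEV fits are (30.0, 8.0, -0.05)/(33.5, 9.0, -0.05)
print(delta_scaled_quantile(43.1, (30.0, 8.0, -0.05), (33.5, 9.0, -0.05), 100))
print(idw_square([12.1, 13.4, 11.8, 12.9], [5.2, 8.7, 9.9, 14.3]))
```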
IDF_CC Tool Implementation
The web-based tool developed for updating IDF curves has the usual components of a Decision Support System (DSS), as presented in Figure 3. The user interface relies on a GIS (geographic information system) tool that is responsible for presenting stations on the map. User information, station data, climate model data and series are stored in the tool's database system. The mathematical models and algorithms assist in the IDF fitting and updating process, as described above. The primary objective of the tool is to automate and facilitate the IDF update procedure, using historically observed data collected from rainfall stations and precipitation data from climate model series as input. The update procedure requires the historical sub-daily annual maxima of observed precipitation data to be provided by the user. In the case of Canada, a repository of stations from Environment and Climate Change Canada (ECCC, the country's official environmental agency) is pre-loaded and available through the user interface with sub-daily historical records.

The IDF_CC tool incorporates three of the most commonly used RCPs with the largest sets of available model outputs: 2.6, 4.5 and 8.5. The numbers (2.6, 4.5 and 8.5) represent the radiative forcing values in W/m² at year 2100, accommodating a set of anthropogenic emissions, as detailed in chapter 8 of IPCC AR5 [63]. More details about the energy balance models, the projected temperature changes and the correlation with each of the RCPs can be found in [64].

Based on the precipitation series, either provided by the user or taken from official sources, the IDF curve is first fitted to the observed historical data using the Gumbel and GEV distributions. With the IDF fitted, possible changes in the future are calculated from the selected GCM model using the EQM method. Results are presented in the form of tables and interactive graphs. As mentioned, the GCM models for IPCC AR5 [17] provide scenarios for the future (RCPs), and each RCP usually has several different runs.

For this reason, a range of possible future IDFs is generated with the application of the EQM method. Results for future IDF curves are available as a median and a range, representing outputs based on each available RCP, in the form of tables and interactive graphs. Output uncertainty is associated with the different climate projections and runs available for each GCM and RCP. The IDF_CC tool was designed in the form of a decision support system (DSS) to generate local IDF curve information that accounts for the impacts of climate change. The following section describes the components of the tool as implemented. For details of tool implementation and use, the user can consult the IDF_CC tool Technical and User's Manual [65,66].

IDF_CC Tool Components
This section describes the three major system components of the IDF_CC tool: (i) the user interface (UI); (ii) the model base; and (iii) the database and climate data model repository. The IDF_CC tool is implemented in three distinct logical layers, as presented in Figure 4: the first layer is the user interface, the second is the model base and the third is the database and netCDF file repository.
User Interface
The user interface provides for communication between the user and the other two DSS components: the models and the database. The three major parts of the user interface are: (1) the Leaflet API, the GIS component responsible for map operations; (2) data manipulation, the functionalities that allow users to manage stations and data; and (3) visualization of the results, the functionalities dedicated to presenting results to the user (tables, equations, interactive graphs). The GIS tool allows switching between several different background maps and has common GIS functionalities such as zoom and pan (Figure 5). The data input functions are built using Excel-like spreadsheets with copy-and-paste functionality. These characteristics facilitate the manipulation of rainfall datasets, which can be easily imported from and exported to Excel spreadsheets and text files. The results are visualized through a user-friendly and interactive graphical presentation of the IDF curves and equations. Figure 5 also presents the main menu options that allow the user to access the two main modules of the IDF_CC tool: IDFs for ungauged locations and IDFs for gauged locations.

Model Base
The mathematical models provide support for the calculations required to develop the IDF curves based on the historical data and to incorporate climate model data to project the updated curves for the future. The algorithms included in the model base of the IDF_CC tool are:
• A statistical analysis algorithm, applied to fit the selected theoretical distribution to both historical (Gumbel and GEV) and future precipitation data (GEV), using the method of moments to estimate the parameters of the Gumbel distribution and the L-moments method for the GEV.
• An optimization algorithm, based on the differential evolution (DE) algorithm introduced by [67], used in the equidistant quantile matching. The DE algorithm is used to find the coefficients of the equation that establishes a statistical relationship between the historically observed data and the model's baseline, as described in Section 2.4.1. The optimization algorithm is also used to fit the analytical relationships of the IDF curves: for each return period (T), an equation is fitted by finding the coefficients of the IDF equation through minimization of the sum of the squared errors between the IDF curve and the values calculated by the equation (see the sketch after this list).
• The updated equidistant quantile matching (EQM) algorithm, applied in the IDF curve updating procedure. This algorithm combines historical precipitation data with climate model data to develop the IDFs for future periods (Section 2.4.1).
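As a hedged illustration of the optimization bullet above, the sketch below uses SciPy's differential_evolution to fit an analytical IDF equation to intensities for a single return period. The functional form i(t) = A/(t + B)^C is one common choice assumed here for illustration; the text does not specify the tool's exact equation, and all numeric values are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical duration (min) / intensity (mm/h) pairs for one return period T
t = np.array([5, 10, 15, 30, 60, 120, 360, 720, 1440], dtype=float)
i_T = np.array([160.0, 121.0, 99.0, 68.0, 44.0, 27.0, 12.0, 7.4, 4.3])

def sse(params):
    """Sum of squared errors between the assumed IDF equation and the
    distribution-derived intensities (the quantity the DE algorithm minimizes)."""
    A, B, C = params
    return float(np.sum((i_T - A / (t + B) ** C) ** 2))

# Global search over plausible coefficient bounds (also illustrative)
result = differential_evolution(sse, bounds=[(1.0, 1e4), (0.0, 60.0), (0.1, 2.0)], seed=1)
A, B, C = result.x
print(f"i(t) = {A:.1f} / (t + {B:.2f})^{C:.3f}")
```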
Database
The database stores user data, information related to stations and their data, and information from the GCMs. The database management system (DBMS) used for the tool's database is the latest version of Microsoft SQL Server™ (MSSQL). Data are organized into relational tables to model aspects of reality, such as the availability of stations, their locations and precipitation series, in support of the calculation of IDF curves by the mathematical models. Besides tables, other essential DBMS features used by the tool include: (1) views, which combine several tables in a relational way and return aggregated data to the user interface; and (2) stored procedures, functions that provide great flexibility for developers and are used to insert and recover data very efficiently from the database with less computational burden. The following information is stored in the database:
• Repository of the ECCC IDF curves dataset: the IDF_CC tool's database stores the latest records from the hydro-meteorological station information available from ECCC stations across the country. There are approximately 700 stations throughout the country. Only publicly available data from the ECCC stations are stored in the tool's database, including station name, location, coordinates, station ID, sub-daily AMP records and daily precipitation data.
• Dataset of gridded IDF curves for the ungauged location module. The dataset is stored in the IDF_CC tool's database as another physical table, associating the coordinates with the IDF estimates obtained using the methodology described in Section 2.3.
• Climate projections in the form of Global Climate Model (GCM) output files, converted from the netCDF format to an MSSQL database structure created for the IDF_CC tool that is more efficient for use with the tool's algorithms. The GCM data are available in a gridded format; for each grid point, precipitation series are available. These points cover the globe and are represented by a pair of coordinates (longitude and latitude). The database structure was created to allow the grid points to be stored with geographic information and the associated series in tabular form. The selection of the grid points from the GCMs and the associated series is made with the nearest-neighbor query available in MSSQL, which adds to the efficiency of the tool's IDF updating procedure.
• Some user information, required to access the IDF_CC tool's functionalities: the user must create an account and provide data that are stored in the database, including their name, email, institution/municipality, intent of use and password.
• User-provided stations and data: any registered user of the IDF_CC tool can create stations and provide data for them. The types of data and input options are discussed in Section 4 of the paper. User-created stations can be shared with other users registered with the IDF_CC tool. Stations created by users contain the same basic data as the ECCC stations, including name, ID, coordinates and location. The coordinates allow the tool to plot the station on the map with different colors for easier identification. Users can provide data for their station either as pre-processed sub-daily annual maximum precipitation (AMP) series or as raw daily maxima series. The tool identifies the type of data provided and processes the IDF curve calculation accordingly. There are several sub-daily durations the user can choose from: 5, 10, 15, 20 and 30 min and 1, 2, 3, 6, 12, 18 and 24 h.
• Users can upload files related to a specific station. The files are also stored in the database and can be text documents, spreadsheets and/or PDF files.

Data from the climate models (raw IPCC and bias-corrected PCIC models) stored in the database require up to 80 gigabytes of storage space. Data from the hydro-meteorological stations and the miscellaneous files associated with ECCC are much less demanding, taking up to 700 megabytes of server space.

IDF_CC Tool: Technical Implementation Details
The tool is a web-based DSS that requires no installation files and is not operating-system dependent. It was built for compatibility with the major web browsers and is mobile friendly. The primary scientific and technical challenges associated with developing the IDF_CC tool were (i) creating a computationally efficient method for downscaling GCM data and updating IDF curves, and (ii) addressing the complexity associated with the large output files produced by GCMs. The former was addressed by the implementation of the equidistant quantile matching (EQM) algorithm [20,65], and the latter by converting the climate model output series from netCDF files into an MSSQL database integrated with the tool. The database that stores the climate model data was fine-tuned to supply the necessary data series to the tool's mathematical models very efficiently. As a result, the updating procedure requires only seconds, even when the GCM ensemble option, which includes all models, is selected.

The mathematical models and functions of the tool were written in the object-oriented C# language, which is part of the Microsoft .NET Framework™. This programming language provides the features required to implement the optimization and EQM algorithms, and all other code used by the tool, efficiently. The user interface is based on a rich combination of technologies: Microsoft ASP.NET, HTML5 (HyperText Markup Language version 5), CSS3 (Cascading Style Sheets, version 3), JavaScript, the jQuery framework and the Leaflet API for the GIS functionalities.
Use of IDF_CC Tool and Results
The IDF_CC tool provides precipitation accumulation depths for a variety of return periods (2, 5, 10, 25, 50 and 100 years) and durations (5, 10, 15 and 30 min and 1, 2, 6, 12 and 24 h), and allows users to generate IDF curve information based on historical data (for gauged or ungauged locations) and future climate conditions that can inform infrastructure management decisions. Users can select from multiple future greenhouse gas concentration scenarios (RCPs) and apply results from a selection of 24 raw climate models [50], 24 bias-corrected models from PCIC [26], or the ensemble combining all models, which simulate various climate conditions at the local scale. The procedure for use of the IDF_CC tool is illustrated in Figure 6.

IDF Curves for Gauged Locations
The tool's database stores the data for the hydro-meteorological stations available from ECCC. There are approximately 700 stations publicly available for the country, and roughly 500 of these have at least 10 years of observation data (the minimum length required to generate reliable IDF curves using the IDF_CC tool).

This section provides a detailed description of the main steps of IDF_CC tool use, illustrated with screen captures of the tool's interface. The intention here is to document the process for using the tool and to assist the reader in starting to use the publicly available web-based tool.

After creating a user account and logging in, the IDF_CC tool allows users to select their location of interest by zooming in on the map, as shown in Figure 7. Alternatively, users may search for, and select, a local ECCC hydro-meteorological station using a text search box (Figure 7). Users have the option of selecting one of the 700 pre-loaded ECCC hydro-meteorological stations or creating and entering data for their own "user created" stations. Users are able to view IDF curves based on the historical records for pre-loaded and user-created stations in both table (Figure 8) and plot formats (Figure 9). Users can also view the interpolation equations (Figure 10) used for generating IDF curves based on historical data (from ECCC) or user-entered rain station data. The return period is noted as T in the screenshots presented in the following figures.

IDF Curves for Ungauged Locations
The current version of the IDF_CC tool incorporates a module with a dataset of ungauged-location IDF curves covering the entire country. The input from the user for this module is the selection of a location on the map, or a pair of coordinates, as presented in Figure 11. Based on the coordinates selected by the user, the IDF_CC tool extracts the nearest grid points from the ungauged dataset, calculates the IDF curve for the selected location, and displays the result, as shown in Figure 12. This curve represents the historical (or observed) period; the development of updated IDFs due to climate change is presented in the next section. The locations created are saved to the user's account for later use and can be deleted by right-clicking on the markers. The tool limits the creation of stations to within Canada.

Use of the IDF_CC Tool for Developing IDF Curves for Future Conditions
The IDF_CC tool can be used to produce updated IDF curves using (i) the gauged-location module with preloaded data from ECCC, (ii) data from the user's own sources, or (iii) the dataset available for ungauged locations, as illustrated in Figure 6.
By selecting the "IDF under climate change" tab the users can generate IDF curves that account for future climate conditions in both modules.To generate the updated IDF curves for future climate, the user can select from 24 raw (CMIP5) and 24 bias corrected (PCIC) GCMs, all GCMs (ensemble option) or an individual model and the projection period (any minimum 30-year period between 2006 and 2100-Figure 13).The models available within the IDF_CC tool are listed in Appendix A. The minimum 50 year period was selected based on the experiments with data from the climate models and studies [68][69][70] that present extensive analysis of the effect of short data series on the parameters of the GEV distribution. The steps to produce updated IDF curves can be summarized as follows: (i) select an existing or created station (gauged module) or a location on the map (ungauged module); (ii) calculate the historical IDFs that can be used for comparison with the IDFs for future climatic conditions; and (iii) select the GCM model and projection period, and generate the IDF curves for future climatic conditions.The results for each GCM model are automatically provided for three future emission scenarios (RCP2.6,RCP4.5 and RCP8.5).Outputs for IDF curves based on future climate scenarios are provided in tabular and graphical formats, as shown in Figures 7-9.Tables and graphs are automatically generated for each of the three available RCPs (2.6, 4.5 and 8.5).Results are provided for 5 min to 24 h durations, and for 1 in 2-to 1 in 100-year return periods (Figure 13).Further, a comparison graph can be generated to quickly assess the impact of different RCPs on outputs for a particular station (Figures 14 and 15).All the results, including plots and tables, can be exported for use outside of the tool.Users also have the option of exporting future IDF results in CSV file format for analysis.Exported future IDF results contain outputs reflecting the user's selection of the climate models and projection period. 
Sensitivity Analysis and Comparison of Results
To validate the results of the presented methodology, a brief comparison between the projections of the bias-corrected models and values from previously published material for the London CS station is conducted. The projections that the IDF_CC tool produces cannot be compared directly with other sources, given the innovative and unique nature of this work. The comparison of results also depends on the choice of parameters, such as the climate model, the period of analysis and the representative concentration pathway (RCP) selected. As an example, Table 1 presents the projected future IDF curve (the 50th percentile of the multi-model ensemble) for the London CS station for two selected return periods, 50 and 100 years, and several durations, using all 24 bias-corrected (Table A1) and all 24 raw (Table A2) climate models for RCP 8.5 and the late-century period (2071-2100). The projections are calculated with respect to the baseline values, as indicated in Table 1. The range of the projected increase in IDF values is 25.3% to 30.1% for the 50-year return period and 27.0% to 30.2% for the 100-year return period for the bias-corrected models, while for the raw climate models the projected ranges are 25.9% to 33.9% and 25.0% to 33.5%, respectively. The results show close agreement across the dataset of climate models in the IDF_CC tool, and these results (magnitude and direction of the projections) are in line with previous studies conducted for the same station [16,23,66]. The projections are also compared with the MTO IDF Curves Finder tool (MTO, 2020). Once more, it is crucial to note that no direct comparison should be drawn, since the methodology of the IDF_CC tool is very different from that of the MTO tool. The IDF_CC tool makes use of a large number of projections (climate models), whereas the MTO tool uses linear trend analysis to extrapolate the values from the baseline (historical period) to obtain future projections, with no direct information from the climate models incorporated. Table 2 presents the values for the 50- and 100-year return periods for several sub-daily durations for both tools. The projections from the MTO tool were taken for the year 2085 (representing roughly the late century), and the projections from the IDF_CC tool represent the 50th percentile (median) of the output from the ensemble of the two climate model datasets available for the late century (2071-2100) [71]. The results obtained from the IDF_CC tool once more show consistency in the direction and magnitude of the projections compared with the MTO tool. A more important analysis, however, is the discussion of uncertainty presented in the next subsection. One of the major challenges associated with use of the IDF_CC tool is addressing and describing the uncertainty associated with climate modeling. The projections provided by different models are highly uncertain due to the complex processes driving precipitation and the various ways of modeling these processes. A high number of projections is available in the IDF_CC tool; combining the two climate datasets and the three future RCPs (2.6, 4.5 and 8.5) yields a total of 144 projections, creating a robust set of outputs available to users. Additionally, the IDF_CC tool's flexible architecture offers users the opportunity to apply an ensemble of GCMs, a single GCM, bias-corrected GCM outputs and/or raw climate models. To illustrate the level of uncertainty associated with the various choices, an additional feature is available within the IDF_CC
tool: the presentation of boxplots generated by running all available GCMs for each emission scenario using all available model experiments (runs). Figure 16 provides an example of a box plot of the 5-year return period IDF curve for the London CS station (located in Ontario, Canada) for RCP 8.5 and all 24 raw climate models. Using the outputs from the IDF_CC tool, the uncertainty can also be presented in another format, as shown in Figure 17, where the shaded area presents the range of all possible IDFs for the selected climate models. It is important to highlight that all the IDF projections are equally likely, given that the climate models are built using state-of-the-art knowledge in the field of meteorology.

Conclusions
The process of updating and incorporating climate change impacts into local IDF curves is highly technical and data-intensive. The lack of relevant climate change impact information at the watershed and municipal level has been noted as a challenge that is difficult to overcome in many institutions responsible for decision making, including those with very high adaptive capacity.

Many of the current water infrastructure systems have not been designed to accommodate extreme rainfall events, and increasing urbanization is creating more impervious areas, resulting in larger runoff. Inadequate infrastructure investment and maintenance further aggravates the exposure of urban communities to flooding. Rainfall intensity-duration-frequency (IDF) curves are used for many water management applications in Canada, including the planning, design, operation and maintenance of stormwater management systems, wastewater systems, stormwater detention ponds, culverts, bridges, dams, pumping stations, roads and master drainage planning.

The IDF_CC tool uses a sophisticated and efficient IDF curve updating methodology that incorporates changes in the modeled characteristics of GCMs between the baseline and the future projections. The mathematical models and procedures used within the IDF_CC tool include: (i) spatial interpolation of GCM data using the inverse distance method; (ii) statistical analysis algorithms, which include fitting the Gumbel and GEV probability distribution functions using the method of moments and the method of L-moments, respectively; and (iii) an IDF updating algorithm based on the EQM method.

The tool is designed to allow water managers, municipal infrastructure professionals, provincial and federal government agencies, researchers, consultants and non-profit groups to quickly develop estimates of the impact of climate change on IDF curves for any location in Canada, using the gauged or ungauged modules available within the tool.

The tool is continuously developing, and improvements are frequently being introduced. As a next step, the tool developers are planning to introduce model outputs generated under the new SSP (Shared Socio-economic Pathways) emission scenarios, created for CMIP6, into the IDF_CC tool.
Figure 2. Modified Equidistance Quantile-Matching (EQM) method for generating future IDF curves under climate change for gridded data.
Figure 3. Decision Support System architecture of the IDF_CC tool and illustration of its support of the decision-making process.
Figure 4. Elements of the layer architecture of the IDF_CC tool.
Figure 11. Various ungauged locations selected by the user. The locations created are saved to the user's account for later use and can be deleted by right-clicking on the markers; the tool limits the creation of stations to within Canada only.
Figure 12. IDF table for an ungauged location for a given pair of coordinates. This curve represents the historical (or observed) period; the development of updated IDFs due to climate change is presented in the next section.
Figure 13. Screen for selection of the Global Climate Model (GCM) and time period.
Figure 14. Updated IDF curves for Representative Concentration Pathway (RCP) 2.6 and the ensemble of all raw GCMs.
Figure 15. Comparison graph to assess the impact of different RCPs.
Figure 16. Box plot of the projected range for the 5-year IDF curve for the London station and RCP 8.5, using the 24 raw climate models available within the tool.
Figure 17. Projected range of the 5-year IDF curve for the London station, combining all models, runs and RCPs.
Table 1. Projection (%) for the London CS station, for RCP 8.5, period 2071-2100, using bias-corrected models and the 50th percentile of the multi-model ensemble.
Cost-benefit analysis of VKA versus NOAC treatment in German patients with atrial fibrillation utilizing patient self-testing Background: Clinical complications of long-term anticoagulation in patients with atrial fibrillation cause significant morbidity and have a substantial economic impact on the healthcare system. Objective: To assess the cost-benefit of implementing patient self-testing (PST) in German patients anticoagulated with vitamin K antagonists (VKA) compared to treatment with the new oral anticoagulant drugs (NOAC) apixaban, dabigatran, edoxaban, and rivaroxaban. Methods: A deterministic decision-analytic model was developed simulating the number of major bleedings, ischemic strokes, and hemorrhagic strokes and their associated costs under PST compared to those of treatment with NOAC. Data on the rates of these adverse events in both groups during the first year of treatment were taken from the NOAC approval studies. Direct costs were evaluated from the perspective of the Statutory Health Insurance (SHI), considering the use of resources directly related to PST and the costs incurred by hospital treatment of the adverse events. Univariate sensitivity analysis was performed to examine the extent to which our calculations were affected by varying the parameters considered in our model within plausible extremes. To capture the interactions between multiple inputs, we also provided a probabilistic sensitivity analysis (PSA). Results: When achieving an average time in therapeutic range (TTR) of 78%, implementing PST in VKA patients reduces costs per patient by between €603.38 [USD 681.52] (edoxaban) and €762.64 [USD 861.40] (rivaroxaban) compared to NOAC treatment during the 1-year observation period. In line with the TTR increase, the initially higher number of adverse events per VKA patient becomes largely aligned with that of NOAC-treated patients in the approval studies; the difference in associated hospital costs per patient in the NOAC groups is then only €1.03 [USD 1.16] (in favor of dabigatran), €23.41 [USD 26.44] (in favor of apixaban), €0.53 [USD 0.60] (in favor of edoxaban) and €52.62 [USD 59.43] (in favor of VKA anticoagulation in the rivaroxaban group). In PSA, implementation of self-management results on average in a cost saving between €619.20 [USD 699.39] and €785.24 [USD 886.93] per VKA patient in favor of the SHI. Under all reasonable assumptions, PST remains consistently less expensive irrespective of which NOAC is administered. Conclusion: Implementing PST in German VKA patients may significantly reduce SHI expenditures compared to utilizing NOAC. INTRODUCTION Atrial fibrillation (AF) increases the risk of stroke by a factor of 4-5 and accounts for almost 15% of all ischemic strokes. 1 One in four middle-aged adults in Europe and the US is expected to develop AF, and by 2030, up to 7 million AF patients are anticipated in the European Union. 2 Several studies have demonstrated that the risk of stroke is reduced by oral anticoagulant therapy with vitamin K antagonists (VKA), especially warfarin and phenprocoumon, which were the only oral anticoagulants available until a few years ago for primary and secondary prevention of thromboembolic events. Currently, the non-vitamin K oral anticoagulants (NOAC) dabigatran, rivaroxaban, apixaban, and edoxaban are approved as potential alternatives to VKA treatment.
[3][4][5][6][7] As some meta-analyses suggest superiority of NOAC treatment versus VKA with respect to the reduction of thromboembolisms and bleeding complications, the use of NOAC is recommended in the 2016 European Society of Cardiology (ESC) guidelines as first-line therapy for anticoagulation in atrial fibrillation. 1 However, one major point of criticism, as stated by the Drug Commission of the German Medical Association, 8 is the unexpectedly short mean "time in therapeutic range" (TTR) of the International Normalized Ratio (INR), a standardized value measuring the required prolongation of prothrombin time, in the warfarin control groups of all NOAC approval studies. [3][4][5][6][7] Thus, the higher rates of adverse vascular events in the VKA patients investigated in those studies may partly be explained by the low mean TTR, ranging between 55% (rivaroxaban) and 66% (edoxaban). In contrast, those VKA patients who utilize patient self-testing (PST) during their VKA treatment usually have a significantly higher TTR of about 78%. 9 In a cohort study of German AF patients using claims data of the most representative German SHI, the "Allgemeine Ortskrankenkassen" (AOK Health Insurance Fund), NOAC exposure was associated with significantly higher incidence rate ratios for death or non-specified strokes, myocardial infarction and severe bleeding, suggesting that NOAC therapy does not appear to be more effective and safer than VKA therapy in "real life." 10 In a recently published Danish study 11 on the treatment of AF patients, self-managed VKA treatment was associated with a significantly lower risk of all-cause and ischemic strokes compared to treatment with NOAC, whereas no significant differences were observed for major bleeding and mortality. Indeed, there is evidence [12][13][14] that an increase of TTR in patients under VKA therapy achieved by patient self-testing (PST) is associated with a lower frequency of the three most severe adverse events, namely ischemic and hemorrhagic stroke and major bleedings. Based on a large database of 67 077 Veterans Health Administration patients anticoagulated with VKA, Rose et al. 15 simulated the number of adverse events and their associated costs and utilities, both before and after various degrees of improvement in percent time in TTR, over a 2-year time horizon. A 10% improvement in TTR prevented 2087 events, gained 1606 quality-adjusted life-years, and saved $29.7 million from the payer's perspective. Derived from Rose's approach, we defined mathematical formulas that allowed us to simulate the effects on the number of adverse events in the warfarin control groups of the original NOAC approval studies. We calculated the number of events that would occur if the original TTR in the studies were uniformly increased to a mean TTR of 78%, 9 a figure which has been shown to reflect the reality of PST. In a second step we assessed the costs of NOAC and VKA treatment of AF patients when PST 16 was implemented for the first year under country-specific conditions, from the perspective of the German SHI. Our aim was to examine the possible economic advantages of implementing PST in VKA patients compared to administering NOAC, when supervised by a properly trained general practitioner (GP). Since short-acting warfarin is only rarely prescribed in Germany, we replaced it in our model with long-acting phenprocoumon, which is taken by 98% of German VKA patients 17 and allows higher stability of plasma concentrations. 18
ETHICAL CONSIDERATIONS Ethical approval was not necessary as only publicly available secondary data were used.

A) MODEL APPROACH Our model was parametrized by data on the rates per patient and year of three main adverse events in the study populations of the four NOAC approval publications. We assume that a relevant increase in the TTR is possible by PST under the supervision of a GP. Only costs to be carried by the SHI that arise on the legal basis of the German social security statute book (SGB) V are included. These are the costs of medication (NOAC and phenprocoumon), aids (CoaguChek® system, test strips and lancets, see 16 in accordance with §31 SGB V), the costs of training on the proper use of the test system, the costs of outpatient primary care, laboratory costs, and the costs of hospital stays resulting from adverse events. The frequency of the three adverse events in both comparison groups (patients with NOAC or VKA) reported in the NOAC approval studies 3-7 is standardized to one year, due to the different durations of the approval studies, in order to allow for comparability. Due to the lack of empirical data, the model assumes that the adverse events occur at the end of the first year of comparison and, due to their severity, require hospitalization. The outpatient costs refer to the National Association of Statutory Health Insurance Physicians' Uniform Assessment Standard (EBM), 19 as of January 2019, and, with respect to the reimbursement of medical devices and other aids for performing patient self-management, to the agreement with the National Association of Statutory Health Insurance Funds ("Spitzenverbände der Krankenkassen") of February 2002. Costs arising from hospitalization and from prescribing anticoagulants are based on the G-DRG (German diagnosis related groups) system in the most recent version (2019) and the Pharmacy Sales Prices (Red List [Rote Liste®] 2019), respectively. Any discounts granted by the manufacturers of NOAC to some insurances are not taken into account, for reasons of transparency.

B) MODEL STRUCTURE A deterministic, patient-based decision-analytic model was developed, simulating the incremental costs of using one of the NOAC, compared to conventional VKA treatment, with and without PST. The perspective taken is that of the SHI itself (see Figure 1 for an example using apixaban). In this figure, the decision tree simulates the direct costs of the two alternative anticoagulation strategies per patient over the first year of treatment. We used TreeAge Software (TreeAge Inc., Williamstown, MA, USA) for model building and analysis and examined our inputs over a wide range in sensitivity analyses to identify influential factors that would alter the base-case findings. Univariate sensitivity analysis was performed using all variables to examine the extent to which our calculations are affected by varying selected assumptions. Variation was done using either a) the lower and upper bounds of the parameter, if present, or b) the lower and upper bounds of a parameter's 95% confidence interval. Where this is not applicable, we vary parameter values by ±20% of the base-case value according to international practice, unless stated otherwise.
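To make the one-way sensitivity procedure concrete, here is a minimal Python sketch that re-evaluates a toy incremental-cost model while moving each parameter to ±20% of its base case, as described above. The cost model and all values except the €113.40 GP fee mentioned later in the text are hypothetical placeholders, not the actual TreeAge model; note that the GP fee, being identical in both arms, cancels out of the incremental cost, mirroring the point made below about costs that change in parallel in both arms of the decision tree.

```python
from typing import Callable, Dict, Tuple

def one_way_sensitivity(model: Callable[[Dict[str, float]], float],
                        base: Dict[str, float],
                        spread: float = 0.20) -> Dict[str, Tuple[float, float]]:
    """Re-evaluate the model once per parameter, holding all other inputs at
    their base-case values and moving the chosen parameter to its lower and
    upper bound (here +/-20% of the base case, as in the text)."""
    results = {}
    for name, value in base.items():
        low, high = dict(base), dict(base)
        low[name] = value * (1 - spread)
        high[name] = value * (1 + spread)
        results[name] = (model(low), model(high))
    return results

# Hypothetical incremental cost model: NOAC arm minus PST arm, per patient-year
def incremental_cost(p: Dict[str, float]) -> float:
    pst_arm = p["device_and_strips"] + p["training"] + p["gp_fee"] + p["vka_drug"]
    noac_arm = p["noac_drug"] + p["gp_fee"]   # gp_fee identical in both arms
    return noac_arm - pst_arm                 # > 0 means PST is the cheaper strategy

base_case = {"device_and_strips": 250.0, "training": 80.0, "gp_fee": 113.40,
             "vka_drug": 60.0, "noac_drug": 1100.0}
for name, (lo, hi) in one_way_sensitivity(incremental_cost, base_case).items():
    print(f"{name:18s} {lo:8.2f} .. {hi:8.2f}")
```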
Furthermore, in order to capture the interactions between multiple inputs, we provide a probabilistic sensitivity analysis (PSA) by assigning an appropriate statistical (probability) distribution to all input parameters, from which values are randomly drawn in a second-order Monte Carlo simulation (n = 1000). Input parameters are shown together with their probabilistic distributions in Table 1.

MATHEMATICAL MODEL The information from Table 5 of Rose's publication 15 was utilized to calibrate a model that captures the influence of a change (delta, Δ) in TTR (ΔTTR) on the number of ischemic strokes (IS) and major hemorrhages (MH). As MH and hemorrhagic strokes (HS) were not assessed separately in Rose's publication, we assumed that the effect of a TTR increase on MH and HS would be the same. We standardized the data from Rose et al. 15 to derive a linear dependency structure in a regression framework. Subsequently, we regressed TTR on IS and TTR on MH to derive a dependency structure using robust standard errors; the regression results are reported in Table 2. The estimated coefficients were used to generalize the findings of Rose et al., i.e., we utilize the following relationships to estimate mean relative savings:

Relative savings in IS = 0.0134 + 0.6754 × ΔTTR
Relative savings in MH = 0.0101 + 1.4282 × ΔTTR

Multiplying the relative savings in IS or MH resulting from a particular ΔTTR by the baseline IS or MH defines the savings in IS and MH related to that ΔTTR. To incorporate and compare results from different studies, we standardize effects to a patient at risk in the respective study population using the following relationships:

Saving IS = baseline of study (number per study patient at risk in % per year) × (0.0134 + 0.6754 × ΔTTR)
Saving MH = baseline of study (number per study patient at risk in % per year) × (0.0101 + 1.4282 × ΔTTR)

EPIDEMIOLOGICAL PARAMETERS (TAKEN FROM THE APPROVAL STUDIES 3-7) The risk of having an ischemic stroke (IS), hemorrhagic stroke (HS) or major bleeding (MB) in the NOAC approval studies is shown separately for the respective NOAC and the warfarin control group in Table 3, together with the mean time in therapeutic range (TTR) of the warfarin group members. (***, **, and * indicate significant estimates at the 0.01, 0.05, and 0.1 significance levels, with t-statistics in parentheses.)

Cost of coagulation self-management The measurement intervals for coagulation self-management are much closer than for measurements exclusively in primary care practice (see below). Autonomous self-adaptation of the phenprocoumon dose requires training, a sufficient stock of test strips and the presence of a (maintenance-free) test device. The CoaguChek®-INR device is depreciated over 60 months, as is customary for medical measuring instruments (36 months to 10 years in sensitivity analysis).

Laboratory costs in the GP's practice According to the statements in the Federal Gazette, a total of 6 additional medical check-ups are accepted in medical practice (monthly in the 1st quarter, i.e., three times, and once each in the 2nd through 4th quarters). As it is also necessary to check to what extent the INR value obtained with the CoaguChek® INR system is in accordance with the laboratory value as a reference, it is generally assumed that the blood will be drawn at the GP's practice and subsequently transferred to the collaborative laboratory. The laboratory community will receive €0.6.
Cost of NOAC control According to the proposals of the 2nd edition of the Practical Guide of the European Heart Rhythm Association, 23 follow-up intervals are required for NOAC: a first check one month after the initial prescription, and thereafter clinical controls approximately every 3 months, at most every 6 months, depending on patient factors such as age, renal function and comorbidities. Nevertheless, in view of the frequent comorbidities in patients with AF, e.g., diabetes mellitus or arterial hypertension, it would be unrealistic to assume that during the study period of one year NOAC patients would not see their doctor for whole quarters, or that the type and frequency of routine parameters to be investigated (blood count, liver and kidney values) would differ from those of phenprocoumon patients. Therefore, the same costs as for VKA patients, €113.40 [USD 128.09], are assumed as payment for the GP per year. The costs of determining those routine parameters are not considered in our model for reasons of insignificance (determining the creatinine value, for example, amounts to only €0.25 [USD 0.28] and performing a blood count to only €0.50 [USD 0.56]).

RESULTS Achieving a mean TTR of 78% by implementing patient self-testing (PST) is on average between €603.38 [USD 681.52] and €762.63 [USD 861.39] less costly per VKA patient in the first year, compared to utilizing NOAC (see Table 4a-b). Assuming that an increase of the original mean TTR reported in the respective approval studies by approximately 16 percentage points for apixaban, 14 points for dabigatran, 13 points for edoxaban, and 23 points for rivaroxaban to the target of 78% can be achieved, the number of severe adverse events in VKA patients decreases, and accordingly the associated hospital costs become largely aligned. Thus, after TTR adjustment, the cost difference for those severe events per VKA and NOAC patient in the various groups is then only €1.03 [USD 1.16] (in favor of dabigatran), €23.41 [USD 26.44] (in favor of apixaban), €0.53 [USD 0.60] (in favor of edoxaban) and €52.62 [USD 59.43] (in favor of VKA anticoagulation in the rivaroxaban group). In contrast, varying the G-DRG costs of hospitalization for treating the ischemic or hemorrhagic stroke or major bleeding as potential adverse events by ±20% resulted in only minor changes of total cost. This is because costs incurred for those adverse events would always change in parallel in both arms of the decision tree. In PSA, i.e., under all reasonable assumptions, implementing PST for anticoagulation at the expense of the SHI saved on average between €619.20 [USD 699.39] (edoxaban) and €785.24 [USD 886.93] (rivaroxaban) per patient (see Table 5). Here treatment with phenprocoumon remains consistently less expensive than treatment with any of the four NOAC. The risk per patient of suffering one of the three severe adverse events (ischemic or hemorrhagic stroke or major bleeding) under VKA or NOAC is in fact only at the single-digit percent level or lower. Increasing the TTR from the respective mean baseline level of the VKA population in the approval studies (starting from a TTR of 55%, 63% or 65%) to 78%, the realistic target of our model, nearly outweighs the initially lower number of strokes and bleedings in NOAC patients. Accordingly, the remaining differences in the associated costs compared to those incurred for severe events in the respective NOAC groups are low. Therefore, although the costs of implementing PST, especially the costs of the INR measurement device and the mandatory patient training, are not negligible, NOAC drug costs remain the dominant cost component in our cost-benefit analysis.
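The TTR-adjustment step underlying these results can be reproduced directly from the regression coefficients given in the mathematical model above. The following sketch applies the relative-savings relationships to hypothetical baseline event rates; only the intercepts and slopes come from the text.

```python
def adjusted_event_rate(baseline_rate: float, delta_ttr: float,
                        intercept: float, slope: float) -> float:
    """Event rate per patient-year after a TTR improvement of delta_ttr
    (expressed as a fraction, e.g. 0.16 for 16 percentage points), using the
    linear relative-savings fits reported in the text."""
    relative_saving = intercept + slope * delta_ttr
    return baseline_rate * (1.0 - relative_saving)

# Coefficients from the regression on the Rose et al. data (see above)
IS = dict(intercept=0.0134, slope=0.6754)   # ischemic stroke
MH = dict(intercept=0.0101, slope=1.4282)   # major hemorrhage (also applied to HS)

# Example: warfarin arm with a baseline TTR of 62%, raised to the 78% target
delta = 0.78 - 0.62
baseline_is, baseline_mh = 0.012, 0.031     # hypothetical events per patient-year
print(adjusted_event_rate(baseline_is, delta, **IS))
print(adjusted_event_rate(baseline_mh, delta, **MH))
```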
Probabilistic sensitivity analysis (PSA), which considers realistic assumptions of uncertainty, demonstrates that performing PST is consistently less expensive than a NOAC. Thus, even when comprehensive discount agreements on the pharmacy retail prices between NOAC manufacturers and individual health insurance organizations are taken into consideration, self-managed VKA treatment can be considered the strategy of choice as far as economic aspects in favor of the German SHI are concerned. Our study has some limitations that must be considered when interpreting our results: First, the effects of TTR increases on the number of adverse events and their associated costs are based on a mathematical approach that has been derived from a previous U.S. study without a direct relationship to the NOAC approval studies. Second, in all NOAC approval studies warfarin was used as the VKA; warfarin, however, is rarely prescribed in Germany in favor of phenprocoumon. Thus, in our analysis we set warfarin and phenprocoumon, which have identical costs per tablet in Germany, to the same level of adverse effects as reported in the NOAC approval studies. We should note that our model may overestimate the warfarin-induced adverse effects reported in the respective approval studies. This is because warfarin has a shorter half-life than phenprocoumon, and patients using phenprocoumon more often have INR values in the therapeutic range than those using warfarin; thus phenprocoumon seems preferable for use in long-term therapeutic anticoagulation. 24 Third, our calculations refer only to the treatment of strokes and major bleedings immediately after diagnosis in a German hospital. Excluded from the model are costs for aftercare services provided by specialized rehabilitation centers, or physiotherapeutic services offered in ambulatory outpatient settings. Fourth, the results of the model can only be generalized if all VKA patients are mentally able to participate in PST and are willing to perform the ongoing INR checks, and if all NOAC patients regularly take their drug in the suggested doses. Indeed, 100% implementation of PST for all patients who are actually anticoagulated with a vitamin K antagonist may be a quite unrealistic option. However, in a large prospective cohort study in Switzerland, those patients who decided to participate in PST had a high adherence of about 90% during a median follow-up of 4.3 years. 25 On the other hand, there are concerns whether the results of the double-blinded approval studies of the four NOAC under consideration can also be assumed to be valid in real life: for example, in US veterans the adherence of AF patients to dabigatran was reported to be only 72.2%. 26 As the risk of a stroke increases by 13% per 10% decrease in adherence (HR 1.13; 95% CI 1.07-1.19), an advantage with respect to adverse events simply by administering NOAC is not guaranteed. 27,28 Thus, to validate our estimates, more cost studies, preferably with a multicenter and prospective study design, are required.

CONCLUSION The utilization of PST in anticoagulated German VKA patients with atrial fibrillation is likely to reduce overall costs. As such, routine implementation of PST may also have a direct and positive impact on the control of clinical complications, especially stroke and major bleeding rates. Prospective clinical studies should be undertaken to validate our model and to further evaluate its economic advantages in the immediate future.
Microfluidics-Based Approaches to the Isolation of African Trypanosomes African trypanosomes are responsible for significant levels of disease in both humans and animals. The protozoan parasites are free-living flagellates, usually transmitted by arthropod vectors, including the tsetse fly. In the mammalian host they live in the bloodstream and, in the case of human-infectious species, later invade the central nervous system. Diagnosis of the disease requires the positive identification of parasites in the bloodstream. This can be particularly challenging where parasite numbers are low, as is often the case in peripheral blood. Enriching parasites from body fluids is an important part of the diagnostic pathway. As more is learned about the physicochemical properties of trypanosomes, this information can be exploited through use of different microfluidic-based approaches to isolate the parasites from blood or other fluids. Here, we discuss recent advances in the use of microfluidics to separate trypanosomes from blood and to isolate single trypanosomes for analyses including drug screening. Introduction African trypanosomes are protozoan parasites that cause disease in livestock (including Trypanosoma congolense, T. vivax, T. evansi, T. brucei brucei) and humans (T. b. rhodesiense and T. b. gambiense). The parasites are transmitted by various routes. Tsetse flies of the Glossina genus transmit many key species, but other biting flies can transmit T. evansi and T. vivax. In the case of T. equiperdum, the disease is transmitted venereally in horses. The life cycle of trypanosomes is, in general, complex and depends on the species. In the best-studied T. brucei group, two key forms are generally considered in the mammalian hosts (proliferating long slender forms and non-proliferating short stumpy forms, pre-adapted for transmission to the tsetse fly; see Figure 1), and multiple forms within the vector (including the midgut proliferative procyclic trypomastigote form and the metacyclic trypomastigote form found in salivary glands, pre-adapted for transfer back to mammals) [1]. During infection, trypanosomes enter the host's bloodstream or lymphatic system during a blood meal of the tsetse fly. Later, trypanosomes also enter the central nervous system, where they induce neurological disorders and dysregulate the sleep-wake cycle of their host [2]. The exact route from blood to the brain is still not completely elucidated: the current model, in which trypanosomes cross directly from the blood to the brain via the blood-brain barrier, has recently been challenged. In the proposed alternative model, trypanosomes cross from the blood into the cerebrospinal fluid (CSF) and possibly via the Virchow-Robin space into the brain, circumventing the blood-brain barrier entirely [2]. In a recent review, Mogk et al. discuss both models and support the latter one, hypothesizing the existence of a chronic infection stage in the meninges [2]. Recently, it has been shown that trypanosomes can be found in skin [3,4] and fatty tissues [5,6]; indeed, evidence indicates these parasites can reside and proliferate in a wide range of organs, including the heart (which may account for significant morbidities and mortalities [7,8]) as well as the genital organs [9].
In spite of the flagellum offering little motility in the bloodstream, where the force of blood flow exceeds that produced by the flagellum, it is likely to play a crucial role in motility within other environments in which the trypanosomes find themselves, for example, in other organs, where it does generate the force required for motility. It also plays a key role in evading the immune system [10]. Employing optical tweezers, Stellamanns et al. measured the force generated by the trypanosome flagellar motor to be about 1 pN, while the power output amounted to 5.9 × 10⁻¹⁶ W, about nine times more than needed for its propulsion alone in a static fluid environment [11]. The excess energy of each stroke of the flagellum (about 4.0 × 10⁻¹⁷ J) generates a hydrodynamic drag force, which draws any surface-bound antibodies towards the flagellar pocket, a region through which the flagellum leaves the cell and where the sub-pellicular microtubule array is absent, creating a specialised space for endocytosis and exocytosis. Antibodies swept to the flagellar pocket are endocytosed and subsequently digested within the phagolysosomal system [10]. Together with their ability to express, in semi-sequential fashion, a single variant surface glycoprotein gene from a repertoire of many hundreds [12], the parasites can sustain infections for many months or even years. Diagnosis of trypanosomiasis still depends upon identification of trypanosomes in blood, a medium in which there may be up to 10⁹ trypanosomes per mL, compared with only 10⁷ per mL in culture medium [13,14]. In order to obtain high numbers of trypanosomes for visual identification or for molecular and biochemical analysis, a process of enrichment of blood may be required. Physicochemical properties have long been exploited to enrich for trypanosomes from whole blood.
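As a rough plausibility check of the flagellar force and power figures quoted above, one can compare them with the Stokes drag on a sphere-equivalent swimmer. The sketch below assumes a radius of 2.5 µm, a swimming speed of 20 µm/s and the viscosity of water; none of these values come from the cited measurements, and a sphere is a crude stand-in for an elongated trypanosome, so only order-of-magnitude agreement with the nine-fold figure should be expected.

```python
import math

eta = 1.0e-3   # viscosity of water at ~20 C, Pa*s (culture medium is similar; assumed)
a   = 2.5e-6   # sphere-equivalent radius of a trypanosome, m (assumed)
v   = 20e-6    # swimming speed, m/s (assumed)

drag_force = 6 * math.pi * eta * a * v   # Stokes drag on a sphere, N
propulsion_power = drag_force * v        # W needed just to hold this speed
motor_power = 5.9e-16                    # measured flagellar output from the text, W

print(f"drag force         ~ {drag_force:.2e} N (compare ~1 pN measured)")
print(f"propulsion power   ~ {propulsion_power:.2e} W")
print(f"motor / propulsion ~ {motor_power / propulsion_power:.0f}x")
```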
For instance, the microhematocrit centrifugation technique (mHCT) is based on the difference in density between trypanosomes and red blood cells. After centrifugation of whole blood at high speed in anticoagulant-coated capillary tubes, trypanosomes can be collected in the buffy coat, that is, the leukocyte-containing layer that sits at the interface between the erythrocyte pellet and plasma [15]. Furthermore, even purer populations of trypanosomes can be obtained using the mini-anion-exchange centrifugation technique (mAECT). This method is based on the fact that the exposed surface charge of these parasites is neutral, due to their being sheathed by the contiguous variant antigen coat that protects their surface from immunoeffector molecules. Blood cells, by contrast, are negatively charged in neutrally buffered solutions due to the exposed phospholipid bilayer. Anion-exchange substrates, such as a diethylaminoethyl cellulose (DEAE-C) matrix, therefore bind red blood cells and leukocytes, while trypanosomes remain unbound and pass freely through anion-exchange columns [16,17]. The mAECT was adapted to be performed on the buffy coat after centrifugation of 5 mL of whole blood (mAECT-BC), which was shown to significantly improve the sensitivity of this diagnostic test [18]. A recent comparative study reported that mAECT-BC is the parasitological technique for blood examination that offers the best performance in the context of gambiense human African trypanosomiasis (HAT) diagnosis [19]. In this review, we discuss recent advances in the use of microfluidic-based approaches to separate trypanosomes from blood using a range of different physicochemical parameters.

Separation by Dielectrophoresis

Trypanosomes carry no net surface electric charge, which, as stated above, underpins the ability to separate them from blood cells using anion-exchange chromatography [20]. However, in a non-uniform electric field, a dipole moment can be induced within the trypanosome that can result in a dielectrophoretic force (DEP), i.e., movement caused by the interaction of the induced dipole moment with an electric-field gradient. Menachery et al. employed this DEP technique to enrich trypanosomes in murine blood in a four-armed spiral electrode array (see Figure 2). By applying a quadrature-phase voltage of 2 V at 140 kHz inside this spiral electrode array, they generated a traveling-wave electric field. This field separated trypanosomes from red blood cells according to their induced dipole moment, shape and size. Affected cells underwent levitation and translational motion. Red blood cells (RBCs) were pushed upwards and outwards, while trypanosomes were pulled downwards and towards the centre of the device. A key aspect of the DEP separation is that the lateral movement of the cells depends not only on the cells' shapes and sizes, but also on their internal polarizability, regardless of surface charge, which distinguishes the approach from existing methods used to enrich trypanosomes (e.g., centrifugation and mini-anion-exchange chromatography). In order to optimize the separation of trypanosomes from the blood-sample matrix, it was necessary to analyze the cross-over frequencies of both cell types. The cross-over frequency is characteristic of each cell type and describes the AC frequency at which viable cells experience a reversal of the DEP force direction (i.e., from a force directed away from regions of high electric-field density to a force attracting them towards such regions).
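How a cross-over frequency arises can be illustrated with the simplest dielectrophoresis model, a homogeneous lossy sphere, whose Clausius-Mossotti factor changes sign at the frequency where conductivity-dominated and permittivity-dominated polarization balance. The sketch below uses purely illustrative permittivities and conductivities; real cells require multi-shell models and measured parameters, and the 140 kHz value above comes from experiment, not from this toy model.

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

def re_cm_factor(freq, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere;
    its sign determines whether DEP attracts the particle towards (positive)
    or repels it from (negative) regions of high electric-field density."""
    w = 2 * np.pi * freq
    ep = eps_p * eps0 - 1j * sig_p / w   # complex permittivity of the particle
    em = eps_m * eps0 - 1j * sig_m / w   # complex permittivity of the medium
    return ((ep - em) / (ep + 2 * em)).real

freqs = np.logspace(3, 8, 500)   # 1 kHz .. 100 MHz
k = re_cm_factor(freqs, eps_p=60, sig_p=0.01, eps_m=78, sig_m=0.001)
crossings = freqs[:-1][np.sign(k[:-1]) != np.sign(k[1:])]
print("cross-over frequency estimate(s), Hz:", crossings)
```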
In this case, the cross-over frequency for trypanosomes was below 140 kHz, and for RBCs above 140 kHz, thus allowing for effective separation using DEP. This difference in cross-over frequency is due primarily to differences in cell shape and size [20]. After 10 min of separation, the trypanosomes moved under the influence of the travelling-wave electric field to the centre of the spiral electrode array. The voltage had to be kept at a maximum of 2 V to avoid lysis of trypanosomes at the centre of the device [20]. The limit of detection for a single spiral electrode (2.9 mm diameter) device was determined to be 1.2 × 10⁵ trypanosomes per mL in whole blood, which could be improved by about two orders of magnitude when using ten spiral arms with a diameter of 10 mm to process one mL of infected blood. Although this limit of sensitivity exceeds the threshold of many trypanosome infections (which can be as low as <10 parasites per mL of blood), the fact that the array was designed for a handheld device, powered by batteries, might render it useful for field work in endemic areas with scarce resources. Moreover, as the technology is refined and possibly integrated with orthogonal methods for parasite enrichment, the potential for this technique is exciting. Along with the tunability of the cross-over frequency, various pathogens, including the African trypanosomes of veterinary importance, Trypanosoma cruzi, Leishmania species, flukes or microfilarial worms, could also be enriched for diagnosis with variations on the DEP theme [20].

Figure 2. Trypanosomes amass in the centre of the device, moving around in circles when exposed to AC (f) or being trapped between neighbouring electrodes under two-phasic DC. Reprinted from [20].

Separation by Deterministic Lateral Displacement (DLD)

A second microfluidic approach that has been successfully applied to the separation and detection of trypanosomes employs deterministic lateral displacement (DLD), a technique that uses a structured array of obstacles inside a microfluidic channel to generate a patterned flow-field with which particles interact and, based on physical properties such as size and shape, become spatially separated. The basic principle of separation by DLD can be understood by considering the interaction of particles with solid obstacles in the path of a fluid flow. The centre of a particle in a laminar flow will follow a streamline unless influenced by a force perpendicular to the direction of flow; a steric interaction between particle and obstacle constitutes just such a force. In a simplified model, if a fluid streamline carrying a particle passes closer to the surface of an obstacle than the effective radius of the particle, the particle will be pushed into a neighbouring streamline. The larger the particle, the further it will be pushed. If another obstacle is placed downstream in the correct position, then all particles larger than a critical size, which were pushed far enough by the first obstacle, will again be pushed laterally with respect to the flow direction, while all particles smaller than the critical size will remain in their original streamlines. In a DLD device these small lateral displacements are repeated many times in an array of obstacles, leading to the lateral separation of particles based on their effective size at the end of the array (see Figure 3a). The mechanism of separation by DLD was first shown by Huang et al. [21].
If another obstacle is placed downstream in the correct position, then all particles larger than a critical size, that were pushed far enough by the first obstacle, will again be pushed laterally with respect to the flow direction, while all particles smaller than the critical size will remain in their original streamlines. In a DLD device these small lateral displacements are repeated many times in an array of obstacles, leading to the lateral separation of particles based on their effective size at the end of the array (see Figure 3a). amass in the centre of the device, moving around in circles when exposed to AC (f) or being trapped between neighbouring electrodes of two-phasic DC. Reprinted from [20] Separation by Deterministic Lateral Displacement (DLD) A second microfluidic approach that has been successfully applied to the separation and detection of trypanosomes employs deterministic lateral displacement (DLD), a technique that uses a structured array of obstacles inside a microfluidic channel to generate a patterned flow-field with which particles interact and, based on physical properties such as size and shape, become spatially separated. The basic principle of separation by DLD can be understood by considering the interaction of particles with solid obstacles in the path of a fluid flow. The centre of a particle in a laminar flow will follow a streamline unless influenced by a force perpendicular to the direction of flow; a steric interaction between particle and obstacle constitutes just such a force. In a simplified model, if a fluid streamline carrying a particle passes closer to the surface of an obstacle than the effective radius of the particle, the particle will be pushed into a neighbouring streamline. The larger the particle, the further it will be pushed. If another obstacle is placed downstream in the correct position, then all particles larger than a critical size, that were pushed far enough by the first obstacle, will again be pushed laterally with respect to the flow direction, while all particles smaller than the critical size will remain in their original streamlines. In a DLD device these small lateral displacements are repeated many times in an array of obstacles, leading to the lateral separation of particles based on their effective size at the end of the array (see Figure 3a). The mechanism of separation by DLD was first shown by Huang et al. [21]. The strength of the technique lies in the fact that it is continuous, does not require the application of external fields other The mechanism of separation by DLD was first shown by Huang et al. [21]. The strength of the technique lies in the fact that it is continuous, does not require the application of external fields other than fluid flow, is label free and separates with excellent resolution (better than 1% size difference for polystyrene spheres in the 1 µm size range). In order to design devices for specific applications, a more nuanced model than that presented above is required. Inglis [22] and Davis [23] developed an understanding of the critical size based on the array parameters shown in Figure 3a. To first approximation, the critical size is proportional to the gap size and the square root of the row shift. Studying the behaviour of erythrocytes in DLD devices, Beech et al. [24] and Henry et al. 
Studying the behaviour of erythrocytes in DLD devices, Beech et al. [24] and Henry et al. [25] developed a more detailed understanding of how the size, shape and deformability of the particles themselves contribute to their effective size, and how particles could be separated by these parameters, which are highly relevant from a biological perspective. It is the difference in size, and primarily in shape, that is used for the separation of trypanosomes from blood using DLD. Holm et al. [26] modified DLD devices in order to greatly improve the separation of trypanosomes from blood cells. As mentioned above, the primary requirement of a microfluidic tool to improve detection of trypanosomes is to remove (or reduce) the large background of RBCs. While, at first appearance, trypanosomes seem to be much larger than RBCs, they are hard to separate in standard devices, which are typically made deep to increase throughput (deep devices enable non-spherical particles to rotate such that it is their smallest dimension that defines their critical size). In shallower devices (typically shallower than the longest dimension of the particles), rotation is hindered and the effective size of the particles changes, as shown in Figure 3b. In this way, Beech et al. [24] and Holm et al. [26] showed that careful choice of device depth can maximize the difference in effective size between RBCs and trypanosomes, greatly improving separation. In the most recent iteration, Holm et al. [27] showed a device that could deal with blood containing low concentrations of parasites, with as little dilution as possible, outputting a sample stream of plasma containing parasites and close to no blood cells. The device was designed with simplicity and ease-of-use as the highest priorities. In order to remove the need for expensive and power-consuming pumps, the device has one inlet only, and flow is driven using only a disposable syringe. Because of this simple, one-inlet design, all functionality must be built into the device, which is able to (1) remove leukocytes in order to avoid clogging in subsequent steps, (2) create cell-free plasma and (3) transfer parasites into the cell-free plasma. The functionality in each section of the device comes from a combination of the array-spacing parameters and the depth of the channel (the height of the posts). Figure 3c,d show electron-microscopy images of sections of the multi-depth device from [27]. Through this multi-height design, red cells, white cells and trypanosomes could all be successfully separated from each other, as shown in Figure 4.
Separation of Trypanosomes Using Optical Tweezers and Drug Screening

Optical tweezers have been used for many years to capture individual, microscopic particles through attractive forces yielded by a highly focused laser beam. Captured particles, like solid silica spheres, hollow polymer spheres and single cells, can then be manipulated and moved as though held by remarkably fine tweezers. Optical tweezers have been used on trypanosomes for multiple purposes [11,28-32]. For example, optical tweezers were employed to separate and distinguish trypanosomes from other cells [30,31], to analyse their chemotactic behaviour [28] and to measure the forces created by their flagellar motors [11]. Whilst it is possible to estimate the forces and energy consumption of freely swimming cells [33], optically confined cells can be studied at higher magnification in greater detail. It is worth noting that the shape of cells and the integrated structure of their flagella have a pronounced, yet often overlooked, impact on the motility of trypanosomes [34] and cells in general, and that motility in turn is vital to cell differentiation [35,36]. Since optically confined trypanosomes are limited in their motility but retain their full mobility, a very small field of view can be used to quantify the forces and power of their flagellar motors [11].
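One standard way to turn an optical trap into a force meter is equipartition calibration: the thermal position fluctuations of a trapped object give the trap stiffness directly. The sketch below demonstrates the idea on simulated data with an assumed stiffness; it is a generic illustration, not the calibration procedure used in the cited trypanosome studies.

```python
import numpy as np

kB, T = 1.380649e-23, 298.0   # Boltzmann constant (J/K), room temperature (K)

def trap_stiffness(positions_m: np.ndarray) -> float:
    """Equipartition calibration: for an object in a harmonic optical trap,
    (1/2) k <x^2> = (1/2) kB T, so the stiffness is k = kB T / var(x)."""
    return kB * T / np.var(positions_m)

rng = np.random.default_rng(0)
k_true = 1e-6                                            # assumed stiffness, N/m
x = rng.normal(0.0, np.sqrt(kB * T / k_true), 100_000)   # simulated thermal positions, m
k_est = trap_stiffness(x)
print(f"k ~ {k_est:.2e} N/m; a 1 pN force displaces ~{1e-12 / k_est * 1e9:.0f} nm")
```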
The two distinct motility modes (tumbling and persistent/directional) have been observed both in freely swimming [34,37] and in optically confined trypanosomes, where directional displacement in optical confinement resulted in a persistent rotation around the point of optical confinement [11]. In addition, optical tweezers have been used to select single trypanosomes to study drug impact at the single-cell level [29]. In these studies, trypanosomes were confined individually to microfluidic compartments (microchambers) under no-flow conditions to investigate their motility patterns and to quantify their motility by calculating the mean squared displacement (MSD, see Figure 5). By comparing the MSD of motile trypanosomes to the MSD of paralyzed trypanosomes, it was possible to express the energy a trypanosome consumes for propulsion in multiples of the thermal energy required to cause Brownian motion of an immobile trypanosome [29] (see Figure 5c,d).
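A minimal implementation of the MSD analysis mentioned above might look as follows; the trajectory here is synthetic (directed motion plus positional noise) and the frame rate is an assumed example, but the estimator itself is the standard time-lagged average.

```python
import numpy as np

def msd(track: np.ndarray, dt: float):
    """Mean squared displacement of a 2D trajectory (N x 2 array of positions)
    for all lag times; motile cells show superdiffusive growth, while
    paralyzed cells approach the linear MSD of pure Brownian motion."""
    n = len(track)
    lags = np.arange(1, n)
    out = np.empty(n - 1)
    for i, lag in enumerate(lags):
        d = track[lag:] - track[:-lag]
        out[i] = np.mean(np.sum(d * d, axis=1))
    return lags * dt, out

# Hypothetical track in um: directed swimming plus positional noise
rng = np.random.default_rng(1)
t = np.arange(500)[:, None]
track = 0.02 * t * np.array([[1.0, 0.3]]) + rng.normal(0, 0.5, (500, 2))
taus, m = msd(track, dt=0.04)   # 25 fps video, lag times in seconds
print(m[:5])
```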
These devices also offered a means to study the effects of drugs on trypanosomes (and other cells) using two different approaches, described in Sections 4.1 and 4.2 below.

Ramping

In the ramping method, captured trypanosomes were sorted into larger-sized microchambers (100 × 100 µm², see Figures 5b and 6). The main channel that feeds the chambers is then flushed with a solution containing a high concentration of the test drug. Drug distribution into the microchambers from this reservoir is governed by the diffusion rate, which ensures predictable and reproducible exposure to drug concentrations over a wide range within minutes. Flushing of the main channel with drug-free medium then reduces the drug concentration inside the microchambers by reverse diffusion, thus stopping drug exposure at any chosen time and enabling time-dependent effects of drug exposure to be analyzed. These include differentiation between trypanostatic effects (reversible loss of motility) and trypanolysis or other irreversible effects on viability. It may even be possible to introduce methods that simulate in-vivo pharmacokinetic and pharmacodynamic (PK/PD) profiling by altering the drug concentration to which parasites are exposed over time, thus mimicking in-vivo exposure, where drug doses vary in time following an initial dose. The capability to distinguish between these effects offers a clear advantage over classical microtitre plate-based screening approaches.

Figure 6. Ramping drug exposure: cells are put into the microchambers without any drug present in the device. Then, a drug-loaded solution is pumped through the main channel at high velocity, to ensure a quasi-steady state of maximum drug concentration in the main channel. Through the long and narrow connecting channels, drug diffuses slowly into the microchambers, ramping up the drug concentration inside the chambers in a diffusion-controlled manner. During the entire experiment, the motility of cells inside the microchambers is recorded.

Constant Exposure

The constant exposure approach offers a single-cell, observable derivative of the classical microtitre plate approaches. Microchambers loaded with cells are filled with a drug solution of the desired concentration. The size of the microchamber and of the connecting channel influence the time needed to establish the same drug concentration in the chamber as in the main channel (potentially as quick as a few seconds). Alternatively, cells and drug solutions can be mixed outside the chambers, after the entire device has been washed with drug solution and prior to separating trypanosomes into microchambers, where their motility can be recorded and analysed while being exposed to a constant drug level. The first approach makes it possible to view effects at the onset of drug exposure, while the second approach (premixing) ensures that all recorded data show the cells at a constant drug level, albeit after the lag time incurred by premixing outside of the microfluidic device [29].
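The diffusion-limited loading described for both approaches can be estimated with a lumped model in which a chamber of volume V exchanges drug with the main channel through a connecting channel of length L and cross-section A, giving a time constant τ = V·L/(D·A) and an exponential approach to the main-channel concentration. All geometric values and the diffusivity below are assumptions made for illustration, not the cited devices' actual dimensions; with these numbers the chamber equilibrates on the minutes timescale quoted above.

```python
import math

D = 5e-10                      # diffusivity of a small-molecule drug in water, m^2/s (assumed)
V = 100e-6 * 100e-6 * 10e-6    # chamber volume, m^3 (100x100 um footprint, assumed 10 um depth)
L = 100e-6                     # length of the connecting channel, m (assumed)
A = 5e-6 * 10e-6               # channel cross-section, m^2 (assumed)

tau = V * L / (D * A)          # lumped time constant of diffusive loading
print(f"tau ~ {tau:.0f} s")
for t in (60, 300, 900):
    frac = 1 - math.exp(-t / tau)   # fraction of the main-channel concentration reached
    print(f"t = {t:4d} s: chamber at {100 * frac:.0f}% of main-channel concentration")
```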
Conclusions

Successful application of biophysical techniques in separating trypanosomes from blood cells has been of fundamental importance in diagnosing the disease caused by these parasites and in enabling the molecular and biochemical analysis of separated trypanosomes. Classically, buffy-coat centrifugation and anion-exchange chromatography have been combined to enable this. New methods involving novel microfluidics approaches that can exploit the biophysical characteristics of these cells have proliferated in recent years. Here we have outlined how dielectrophoresis, deterministic lateral displacement and optical tweezer-based methods have been applied to the separation and manipulation of trypanosomes. Another approach, tunable surface acoustic waves (SAW), may also find application [38], given its ability to separate particles according to their shape, size and other characteristics, including deformability; SAW has been applied to create single-drop centrifugation able to separate healthy erythrocytes from malaria-infected ones [38]. In 2015, loop-mediated isothermal amplification (LAMP) was reported as a proven technique for detecting trypanosome genetic material [39-46], and combining a microfluidic device design with LAMP detection of trypanosomes offers a potential improvement in diagnostic capability. Integrating several microfluidics approaches may further improve diagnostic capability, with improved speed and scale of separation. DEP, for example, could also be coupled with shape-dependent opto-electric cell lysis [47-49], or specific chemical lysis [50], wherein RBCs can be selectively destroyed while preserving trypanosomes, improving their detection. The scale at which instruments exploiting these techniques can be produced is amenable to the production of handheld devices, with readouts readily achieved within the processing power of modern smartphones. In addition to both the human-infective and veterinary animal trypanosomes, other haemoparasites may be amenable to similar approaches (e.g., Trypanosoma cruzi, Leishmania species and microfilariae). Of all these techniques, DLD and optical tweezers could potentially also be used to separate different life-cycle stages of the same pathogen (e.g., short stumpy bloodstream forms from long slender bloodstream forms of Trypanosoma brucei brucei), based on differences in their shapes and/or sizes. Moreover, intestinal parasites such as Giardia, Trichomonas and Entamoeba are endowed with similar types of biophysical characteristics, and could likely be separated from stool samples using these approaches, too.
Antitumor effects of cyclin dependent kinase 9 inhibition in esophageal adenocarcinoma

The role of cyclin-dependent kinase 9 (CDK9) as a potential target in esophageal adenocarcinoma (EAC) is unknown. We investigated CDK9 protein expression in EAC and Barrett's esophagus and the role of CDK9 in oncogenic processes of EAC in vitro and in murine xenografts. CDK9 expression was significantly higher in EAC than in Barrett's esophagus in patient samples. Stable shCDK9 in SKGT4 cells reduced proliferation by 37% at day 4, increased apoptosis at 48 hours and induced G1 cell cycle arrest at 48 hours (58.4% vs. 45.8%) compared to control SKGT4 cells. SKGT4-shCDK9 cell-derived tumors were significantly smaller than control SKGT4-derived tumors in xenografts (72.89 mm3 vs. 270 mm3). Pharmaceutical inhibition of CDK9 by Flavopiridol (0.1 μM for 48 hours) and CAN508 (20 and 40 μM for 72 hours) induced a significant reduction in proliferation and a 2-fold increase in apoptosis in SKGT4, FLO-1 and OE33 cells. In xenograft models, CAN508 (60 mg/kg/day × 10 days) and Flavopiridol (4 mg/kg/day × 10 days) caused 50.8% and 63.1% reductions in xenograft tumors as compared to control on post-treatment day 21. Reduction of MCL-1 and phosphorylated RNA polymerase II was observed with transient shCDK9 in SKGT4 cells but not with stable shCDK9. CAN508 (20 and 40 μM) and Flavopiridol (0.1, 0.2 and 0.3 μM) for 4 hours reduced MCL-1 mRNA (84% and 96%) and protein. MCL-1 overexpression conferred resistance to Flavopiridol (0.2 μM or 0.4 μM for 48 hours) and CAN508 (20 or 40 μM for 72 hours). Chromatin immunoprecipitation demonstrated a significant reduction in binding of the transcription factor HIF-1α to the MCL-1 promoter in FLO-1 cells treated with CDK9 inhibitors.

INTRODUCTION

The incidence of esophageal adenocarcinoma has increased rapidly in the USA, with 16,910 estimated new cases and 15,690 estimated deaths in 2016 [1]. The increase in incidence of adenocarcinoma is primarily responsible for the rise in esophageal cancer in the western world [2]. The majority of patients with esophageal adenocarcinoma present with loco-regional (stage II-III) disease. These patients are treated with preoperative chemoradiation followed
by esophagogastrectomy. In spite of this aggressive therapy, the 5-year survival rate is still low (20-30%). The five-year survival rate for metastatic (stage IV) esophageal adenocarcinoma is less than 5%. A major limiting factor for successful implementation of targeted therapy in esophageal adenocarcinoma is the low frequency of target-specific biomarker alterations and intratumoral heterogeneity [3][4][5][6]. Cyclin-dependent kinases (CDKs) are evolutionarily conserved, ubiquitous serine-threonine kinases that play a multifactorial role in the regulation of cell cycle and transcription [7,8]. CDK9, a member of this family, acts predominantly as a transcription regulator by phosphorylating the RNA polymerase II (Pol II) carboxyl terminal domain (CTD) and promoting elongation of nascent transcripts [9,10]. Flavopiridol, a CDK inhibitor with predominant CDK9 inhibitory effects, is used in the treatment of AIDS and has been tested for its efficacy in clinical trials in lymphoma and other solid tumors [11][12][13]. CDK9 inhibitor II, CAN508, is an arylazopyrazole compound that inhibits CDK9 with 38-fold selectivity for CDK9/cyclin T over other CDK/cyclin complexes [14] and regulates CDK9-mediated c-MYC and androgen receptor transcription in breast and prostate cancers [15,16]. More recently, CAN508 has also demonstrated anti-angiogenic effects through a CDK9-dependent mechanism [17]. The specificity of CAN508 for CDK9 is partly attributed to the conformational plasticity of CDK9 [18]. The efficacy of these CDK9 inhibitors in esophageal adenocarcinoma has not been tested. In addition, the relevance of CDK9-mediated MCL-1 regulation and the mechanism by which CDK9 regulates MCL-1 are not known in esophageal adenocarcinoma, even though CDK9-mediated MCL-1 regulation is one of the most common mechanisms by which CDK9 inhibitors have shown anti-tumorigenic effects in other tumors [19,20]. In this study, we compared CDK9 protein expression in matched samples of Barrett's esophagus and invasive carcinoma from patients with esophageal adenocarcinoma and assessed the in vitro and in vivo effects of genetic downregulation (shCDK9) and pharmaceutical inhibition of CDK9. We also studied the mechanism of MCL-1 regulation by CDK9 inhibitors in esophageal adenocarcinoma.

Cyclin dependent kinase 9 is overexpressed in esophageal adenocarcinoma and not in Barrett's esophagus

All esophageal adenocarcinoma cell lines showed high levels of CDK9 protein as compared to a normal esophageal epithelial cell line (Figure 1A). Strong and diffuse expression of CDK9 was observed in more than 90% of invasive adenocarcinoma cells in all tumor samples, with minimal to absent staining of the stromal cells. In contrast, CDK9 expression was observed predominantly in the proliferative zone in the base of the crypts of Barrett's esophagus, with minimal to absent staining of the surface epithelium (Figure 1B, 1C, 1D). Table 1 shows the quantitative assessment of CDK9 expression in invasive adenocarcinoma and in the different compartments of Barrett's esophagus. CDK9 expression was significantly higher in invasive adenocarcinoma (Figure 1E) than in total (all compartments combined) Barrett's esophagus and in each compartment of Barrett's esophagus. CDK9 expression in the lower half of the crypts of Barrett's esophagus was significantly higher than in the upper half.
Genetic down-regulation of CDK9 decreases cell proliferation, promotes apoptosis and G1 arrest in esophageal adenocarcinoma cells, and is antitumorigenic in xenografts

We generated stable SKGT4 cells with downregulated CDK9 expression by transducing lentivirus carrying shCDK9. The downregulation of CDK9 reduced the proliferation of SKGT4 cells by 31.2% at day 3 and 37% at day 4 compared to control cells (p < 0.01, Figure 2A). shCDK9 resulted in a significant increase in apoptotic cells (4.6% ± 0.3% vs. 3.6% ± 0.3%, p < 0.05, Figure 2B) and in cells in G1 phase at 48 hours (58.4% ± 0.97% vs. 45.8% ± 0.39%, p < 0.01, Figure 2C) compared to control SKGT4 cells. In xenograft experiments with genetic downregulation (shCDK9) of SKGT4 and control SKGT4 cells, eleven of 20 mice developed at least one tumor with either parental SKGT4 or shCDK9 SKGT4 cells. There were 16 tumors in 11 mice with parental SKGT4 cells (6 mice with 2 tumors, 4 mice with 1 tumor and 1 mouse with no tumor). There were 8 tumors in 11 mice with shCDK9 SKGT4 cells (1 mouse with 2 tumors and 6 mice with 1 tumor). Four mice with parental SKGT4 tumors did not develop a tumor with shCDK9, and 1 mouse that developed a tumor with shCDK9 SKGT4 did not develop a tumor with parental SKGT4. The volume of SKGT4-shCDK9 cell-derived tumors was significantly smaller (Figure 2D and 2E) than that of tumors from control SKGT4 cells (72.89 ± 12.88 mm3 versus 270 ± 64.07 mm3, p < 0.01). None of the mice demonstrated signs of morbidity such as rapid breathing rate, slow shallow labored breathing, weight loss, ruffled fur, hunched posture or anorexia, or moribund signs such as impaired ambulation, muscular atrophy, lethargy, bleeding, CNS disturbances or inability to remain upright, when monitored daily by either staff of the department of veterinary medicine or personnel performing the experiments. Western blot analysis showed reduction of c-MYC but not of phosphorylated RNA Pol II and MCL-1, with appropriate internal (GAPDH) control, and marked reduction of CDK9 in stable shCDK9 SKGT4 cells (Figure 2F). In contrast, transient downregulation of CDK9 demonstrated reduction of MCL-1, c-MYC and phosphorylated Pol II (Figure 2G).
Pharmaceutical inhibition of CDK9 is cytotoxic in vitro and has antitumor effects in esophageal adenocarcinoma xenografts

CAN508 and Flavopiridol significantly reduced cell proliferation in a dose-dependent manner in all three esophageal adenocarcinoma cell lines in vitro (Figure 3A). Significant inhibition of SKGT4 cell proliferation was detected with 72-hour treatment at a dose of 40 μM of CAN508, while a dose of 20 μM of CAN508 was sufficient to inhibit the proliferation of OE33 and FLO-1 cells (p < 0.05). A dose of 0.1 μM for 48 hours was sufficient for Flavopiridol to inhibit the proliferation of all three esophageal adenocarcinoma cell lines. CAN508 (40 μM for 72 hours) also increased apoptosis 2-fold in all three esophageal adenocarcinoma cell lines compared to untreated controls. Flavopiridol (0.4 μM for 48 hours) increased apoptosis in FLO-1 and SKGT4 cells as compared to control (Figure 3B). CAN508 treatment for 72 hours led to the accumulation of cells in G1 phase (Figure 3C), and Flavopiridol treatment for 48 hours led to accumulation of cells in G2 phase. In xenograft models, both CAN508 and Flavopiridol caused reduction of tumor growth starting from post-treatment day three, with a 50.83% reduction with CAN508 (Figure 4A, p < 0.01 compared to control) and a 63.1% reduction with Flavopiridol (Figure 4B, p < 0.001 compared to control) on post-treatment day 21. There were no significant signs of toxicity throughout the treatment period as monitored by body weights (Figure 4A and 4B) and other signs of toxicity, including rapid breathing rate, slow shallow labored breathing, abdominal distension, ruffled fur, hunched posture and anorexia, and moribund signs such as impaired ambulation, muscular atrophy, lethargy, bleeding, CNS disturbances and inability to remain upright, when monitored daily by either staff of the department of veterinary medicine or personnel performing the experiments.
Pharmaceutical inhibition of CDK9 transcriptionally regulates MCL-1 mRNA and does not modify proteasomal degradation of MCL-1 in esophageal adenocarcinoma in vitro

CAN508 (20 and 40 μM) treatment and Flavopiridol (0.1, 0.2 and 0.3 μM) treatment for 4 hours caused a dramatic reduction of phosphorylation at Ser2 of the Pol II carboxy terminal domain (Pol II CTD) and of MCL-1 protein (Figure 5A). The reduction of phosphorylation at Ser2 of the Pol II CTD and of MCL-1 by CAN508 and Flavopiridol was dose dependent and lasted for at least 16 hours (data not shown). Ubiquitin-dependent proteasomal degradation is a well-established mechanism of MCL-1 reduction in cells [21,22]. MG-132 is an inhibitor of ubiquitin-dependent proteasomal degradation. In our experiments, the MCL-1 level was significantly higher in cells treated with a CDK9 inhibitor and MG-132 than in cells treated with a CDK9 inhibitor alone (p < 0.05). However, the MCL-1 level was significantly lower in cells treated with MG-132 and a CDK9 inhibitor than with MG-132 alone (p < 0.05). This indicates that inhibition of proteasomal degradation partly rescues MCL-1 and that the remainder of the reduction in MCL-1 protein is likely secondary to reduced transcription (Figure 5B). Consistent with this, MCL-1 mRNA expression was reduced by 72.5% ± 6.8% in OE33 cells, by 58% ± 9.1% in FLO-1 cells and by 84.3% ± 5.5% in SKGT4 cells after treatment with CAN508 (40 μM) for 4 hours as compared to the expression in untreated cells (p < 0.05, Figure 5C, left panel). Treatment with Flavopiridol (0.4 μM, 4 hours) resulted in reduction of MCL-1 mRNA expression by 96.8% ± 1% in OE33 cells, 87.1% ± 0.7% in FLO-1 cells and 98.5% ± 0.5% in SKGT4 cells (Figure 5C, right panel). These findings indicate that Flavopiridol and CAN508 regulate MCL-1 predominantly by inhibiting MCL-1 transcription and less so by enhancing proteasomal degradation. CAN508 treatment at 40 μM for 4 hours reduced c-MYC in OE33 cells, while no reduction in c-MYC was observed after treatment with CAN508 in FLO-1 and SKGT4 cells, or with the lower dose (20 μM) in OE33 cells. Treatment with Flavopiridol (0.2 μM and 0.4 μM) for 4 hours decreased c-MYC in OE33 and SKGT4 cells. In FLO-1 cells, only the higher dose (0.4 μM) of Flavopiridol decreased c-MYC (Figure 5D). Flavopiridol and CAN508 are competitive inhibitors of CDK9/positive transcription elongation factor b (P-TEFb) and do not affect CDK9 levels in the tumor cells, as shown in Figure 5D.

Table 1 footnotes: * p value for intensity 0, 2, and 3 compared between the surface and the upper half of the crypt; the p value for staining intensity 1 between the surface and the upper half of the crypt is 0.39. ** p value for intensity 0, 1, 2, and 3 compared between the upper half and the lower half of the crypt of BE. *** p value for intensity 0, 1, 2, and 3 between invasive adenocarcinoma and total BE.

Overexpression of MCL-1 reduces the cytopathic effects of CDK9 inhibitors

Cells with MCL-1 overexpression had very high levels of MCL-1 compared to MCL-1 expression in control cells (Supplementary Figure 1A). As expected, MCL-1 overexpression led to a reduction of the cytotoxicity of CDK9 inhibitors in esophageal adenocarcinoma cells. All three cell lines with MCL-1 overexpression were significantly more resistant to 40 μM of CAN508 than the control cells (p < 0.05, Supplementary Figure 1B). However, only FLO-1 and SKGT4 cells with MCL-1 overexpression were significantly more resistant to Flavopiridol than control cells (Supplementary Figure 1C).
DISCUSSION

In this study, we have demonstrated preclinical in vitro and in vivo efficacy of CDK9 inhibition in esophageal adenocarcinoma, by genetic downregulation and by pharmaceutical inhibition. shCDK9 and pharmaceutical inhibition by an established, clinically used CDK9 inhibitor (Flavopiridol) and a highly specific CDK9 inhibitor (CAN508) demonstrated reduction in cell proliferation, increase in apoptosis and G1 or G2 cell cycle arrest. The similarity in cytotoxic effects between genetic downregulation and both CDK9 inhibitors supports CDK9 as an important therapeutic target in esophageal adenocarcinoma. Flavopiridol and CAN508 had a similar degree of dose-dependent pro-apoptotic effects in 2 esophageal adenocarcinoma cell lines and dose-dependent anti-proliferative effects in all 3 esophageal adenocarcinoma cell lines. Flavopiridol demonstrated higher anti-proliferative effects in SKGT4 cells than CAN508. In contrast, CAN508 demonstrated higher pro-apoptotic effects in OE33 cells than Flavopiridol. SKGT4 cells are derived from a low T stage (T2), well-differentiated esophageal adenocarcinoma, while FLO-1 cells are derived from a stage III/IV esophageal adenocarcinoma and OE33 cells are derived from a stage III, poorly differentiated adenocarcinoma. These differences in histology and stage of disease may explain the different biology of the tumor cells, which is reflected in their sensitivity to CDK9 inhibitors. The difference in cell cycle arrest (G1 vs. G2) with Flavopiridol and CAN508 is possibly due to the effects of Flavopiridol on CDKs other than CDK9 and its lower specificity for CDK9 compared to CAN508, as shCDK9 also demonstrated G1 arrest. In the xenograft experiments, the difference in the rate of xenograft volume change between Flavopiridol and CAN508 is possibly due to the different strains of mice, as the number and type of cells injected were similar. In both experiments, xenograft tumor growth was significantly inhibited by the CDK9 inhibitors compared to control from day 3 (Flavopiridol) and day 6 (CAN508) to the end of the experiment, indicating the efficacy of both inhibitors in controlling tumor growth in esophageal adenocarcinoma xenografts.

In the present study, both CDK9 inhibitors (at doses lower than the calculated IC50) and transient shCDK9 downregulated p-Pol II and MCL-1, while stable shCDK9 did not downregulate p-Pol II and MCL-1. This is likely due to phosphorylation of Pol II and activation of MCL-1 by alternate pathways in stable shCDK9 cells, because of the irreversible effects of stable shCDK9 as compared to the reversible effects of transient shCDK9 and CDK9 inhibitors. The difference in the effects of the type (stable vs.
transient) of genetic downregulation of CDK9 on critical downstream targets of CDK9 (p-Pol II and MCL-1) is new, and additional work with different types of genetic downregulation in esophageal adenocarcinoma and other solid tumor cell lines will provide more insight into the mechanism of action of CDK9 and help identify the appropriate target for CDK9 inhibition in esophageal adenocarcinoma. Transient downregulation (as compared to stable downregulation) is functionally more similar to treatment with CDK9 inhibitors, due to the temporary effects of CDK9 inhibition. The similarities in the effects of transient shCDK9 and CDK9 inhibitors on p-Pol II (a surrogate of CDK9 activity) and MCL-1 support our conclusion that the cytotoxic effects of CAN508 and Flavopiridol are at least partly mediated by CDK9 and that MCL-1 is a potential target of these inhibitors in esophageal adenocarcinoma cells. This conclusion is further supported by the reduced efficacy of Flavopiridol and CAN508 in vitro upon MCL-1 upregulation/overexpression in at least 2 esophageal adenocarcinoma cell lines.

For both CDK9 inhibitors, MCL-1 downregulation was dose dependent and associated with a decrease in phosphorylation of Pol II at Ser2. These findings, along with the absence of increased ubiquitin-mediated proteasomal degradation with the pharmaceutical CDK9 inhibitors, suggest that CDK9 transcriptionally regulates MCL-1 in esophageal adenocarcinoma. A previous study has shown that HIF-1α expression is significantly higher in esophageal adenocarcinoma than in Barrett's esophagus [23], similar to what is observed with CDK9 in our study. However, the role of HIF-1α in critical cellular processes of esophageal adenocarcinoma is unknown. In this study, we demonstrate for the first time that MCL-1 regulation by CDK9 inhibitors is mediated by downregulation of the binding of the transcription factor HIF-1α to the MCL-1 promoter. The interplay of MCL-1 and HIF-1α has been shown to be variable in different tissues and cancers. In hepatoma cells, HIF-1α-induced MCL-1 upregulation is anti-apoptotic [24], while in small cell lung cancer cell lines, hypoxia-induced MCL-1 downregulation is independent of HIF-1α [25]. It will be interesting to study whether the CDK9 inhibitor-induced, HIF-1α-mediated MCL-1 downregulation is dependent on hypoxic injury, and what the end results of HIF-1α-mediated MCL-1 downregulation are on other processes critical to the malignant phenotype, such as angiogenesis. Prior studies have shown that CDK9-mediated transcriptional regulation of MYC is important in therapy resistance in breast cancer and disease maintenance in hepatocellular carcinoma [15,26,27]. Our findings show that MYC is downregulated by shCDK9 in esophageal adenocarcinoma, while MYC downregulation by the CDK9 inhibitors is not consistent across the three esophageal adenocarcinoma cell lines. As the focus of this study was to assess the efficacy and identify a target of pharmaceutical inhibitors of CDK9 in esophageal adenocarcinoma, and MYC did not show consistent alterations after CDK9 inhibitor treatment, we chose to study MCL-1 instead of MYC as the CDK9 target in this study.
Progress in targeted therapy in esophageal adenocarcinoma has been slow, primarily due to the failure of conventional markers used in other gastrointestinal adenocarcinomas (KRAS, EGFR, PTEN, PIK3CA and c-MET) [28] to identify patients with esophageal adenocarcinoma who would respond to a targeted therapy. In addition, the low frequency of biomarkers such as HER2-Neu in esophageal adenocarcinoma has limited the benefit of Trastuzumab to a small group of patients [29]. Cyclin-dependent kinase 9 inhibition is potentially a good therapeutic strategy, as CDK9 is diffusely overexpressed in esophageal adenocarcinoma cells as compared to Barrett's esophagus. The CDK9 overexpression is related to the proliferative nature of the cells, as higher expression was observed in adenocarcinoma cells and in the lower half of the crypts (the proliferative compartment, rich in stem cells) of Barrett's esophagus as compared to the upper half of the crypts of Barrett's esophagus. The higher expression of CDK9 in actively proliferating cells also supports the use of CDK9 inhibitors in combination with chemotherapy agents, as both are likely to be effective in rapidly dividing esophageal adenocarcinoma cells. The CDK9 inhibitors used in this study were selected based on their efficacy in other tumors and their specificity for CDK9, to study the relevance of CDK9 inhibition in esophageal adenocarcinoma. Our findings support further exploration of newer CDK9 inhibitors that are more specific and likely to cause lower toxicity at therapeutic doses in patients with esophageal adenocarcinoma compared to previously used CDK9 inhibitors. In addition, the role of MCL-1 in selecting patients whose tumors are more likely to respond to these CDK9 inhibitors, with or without chemotherapy and radiation, needs to be studied in esophageal adenocarcinoma.

In summary, the findings of this study demonstrate significant in vitro and in vivo efficacy of CDK9 inhibition in esophageal adenocarcinoma and identify that CDK9 inhibition downregulates MCL-1 transcription by reducing the binding of HIF-1α to the MCL-1 promoter. With limited options for targeted therapy in esophageal adenocarcinoma, exploration of CDK9 inhibitors as therapeutic agents in patients with esophageal adenocarcinoma is warranted.

Histopathology review and immunohistochemical analysis

The patient population comprised 9 men and 1 woman with an average age of 73 years (range 54-90 years). All patients underwent standard pretreatment staging, which included esophagogastroduodenoscopy with endoscopic ultrasound (EUS) and biopsy of the tumor and Barrett's esophagus, and CT or PET-CT of the chest and abdomen. All patients had clinical stage I disease, with tumor limited to the mucosa or submucosa and negative lymph nodes by EUS and fine needle aspiration. Long-segment (more than 3 cm long) Barrett's esophagus was found in 7 patients and short-segment (less than 3 cm long) Barrett's esophagus was found in 3 patients. Three patients had biopsy-proven high-grade dysplasia away from the tumor, 5 patients had low-grade dysplasia away from the tumor and 2 patients had non-dysplastic Barrett's esophagus away from the tumor. Seven patients underwent endoscopic mucosal resection and three patients underwent esophagogastrectomy without preoperative chemoradiotherapy. One of the ten patients died of metastatic disease, one patient died of sepsis unrelated to esophageal adenocarcinoma and 8 patients were alive at the time of last follow-up.
Formalin-fixed, paraffin-embedded tissue sections from matched Barrett's esophagus and adenocarcinoma from surgically or endoscopically resected specimens were stained with Hematoxylin and Eosin (H&E) and for CDK9 protein expression by immunohistochemistry. H&E slides were reviewed by an expert gastrointestinal pathologist (DMM), and standard histopathology diagnostic criteria were applied to confirm the diagnoses of invasive adenocarcinoma and Barrett's esophagus with and without dysplasia [30]. Invasive adenocarcinoma was diagnosed when stromal invasion beyond the basement membrane was identified in the H&E-stained sections. Barrett's esophagus was diagnosed as the presence of intestinal metaplasia (presence of goblet cells). The presence and grading of dysplasia in Barrett's esophagus were assessed based on the absence of epithelial maturation on the surface, architectural complexity, increased nuclear/cytoplasmic ratio and the presence of surface mitoses in the metaplastic cells [31]. For immunohistochemistry, tissue sections were deparaffinized, and antigen retrieval was performed with citrate buffer for 15 min at 100 °C. Anti-CDK9 antibody (rabbit monoclonal antibody, Cell Signaling Technology) was then added for 16 hours at 4 °C. Endogenous peroxidase activity was blocked by 3% hydrogen peroxide. The immunoreactive protein was visualized by the Ventana DAB detection system (Dako, Carpinteria, CA). CDK9 nuclear staining intensity was assessed in the surface, upper half and lower half (proliferative compartment) of the crypts of non-dysplastic Barrett's esophagus (intestinal metaplasia) and in the invasive adenocarcinoma cells. The percentages of cells with nuclear intensity 0, 1, 2 or 3 were manually counted in each compartment of Barrett's esophagus and in invasive adenocarcinoma cells, in 10 fields at 200X magnification, by a gastrointestinal pathologist (Supplementary Figure 2).

Cell lines and cell culture

The esophageal adenocarcinoma cells OE33, FLO-1 and SKGT4 were purchased from Sigma-Aldrich (St. Louis, MO). ESO51 and KYAE-1 cells were obtained from Culture Collections (Public Health England, UK). OE33 and ESO51 cells were maintained in RPMI medium containing 2 mM L-glutamine, 10% fetal bovine serum (FBS), 100 units/ml penicillin and 100 μg/ml streptomycin. KYAE-1 cells were maintained in a medium of RPMI + Ham's F12 (1:1), and FLO-1 and SKGT4 cells were maintained in DMEM medium containing 10% FBS, 100 units/ml penicillin and 100 μg/ml streptomycin. 293FT cells were obtained from Invitrogen (Carlsbad, CA) and maintained in DMEM medium supplemented with 10% FBS and 500 μg/ml G418. Normal esophageal epithelial cells, HET-1A, were provided by Dr. Xu (MD Anderson Cancer Center, Houston, TX) and maintained in KSF medium (Lonza Walkersville Inc., Walkersville, MD). Cells were maintained in a 5% CO2 atmosphere at 37 °C and passaged at 80% confluence using 1 mM EDTA-0.025% trypsin for 3 to 5 minutes. All cell lines were authenticated by the cell line validation core facility of UT M.D. Anderson Cancer Center.
Western blot

Proteins from cell lysates were separated on 8% or 10% SDS-PAGE gels. The separated proteins were electrophoretically transferred to PVDF transfer membranes (GE Healthcare Life Sciences, Pittsburgh, PA) and incubated with a blocking solution, 5% dry milk in TBST [25 mmol/L Tris-HCl (pH 7.6), 200 mmol/L NaCl, and 0.1% Tween 20], for 1 h at room temperature. Target protein levels were measured by immunoblotting with antibodies against CDK9 (rabbit monoclonal, Cell Signaling Technology), MCL-1 and c-MYC (Santa Cruz Biotechnology, Inc., Santa Cruz, CA), RNA Pol II (pSer2) and RNA Pol II (Novus Biologicals, Littleton, CO), and HIF-1α (ThermoFisher Scientific, Waltham, MA). Blots were washed three times for 15 min each at room temperature with TBST and then incubated for 1 h with secondary anti-mouse or anti-rabbit peroxidase-linked antibodies (GE Healthcare Life Sciences, Pittsburgh, PA) in blocking solution. Blots were then washed (3 x 15 min). Bands were visualized by enhanced chemiluminescence (GE Healthcare Life Sciences, Pittsburgh, PA). All experiments were performed in triplicate.

Quantitative real-time PCR

Total RNA was isolated from cells using the RNeasy mini kit according to the manufacturer's protocol (Qiagen, Valencia, CA). 0.5 μg of RNA was then reverse-transcribed to cDNA using SuperScript II Reverse Transcriptase (Invitrogen, Carlsbad, CA). Quantitative real-time PCR was performed using SYBR Green mix on a Bio-Rad instrument. PCR primers were designed using the Primer3 program according to the DNA sequence of MCL-1. The quantitative real-time PCR for each treatment was performed in triplicate. Ct values were obtained with the Bio-Rad iCycler data analysis software. The Ct value of GAPDH was subtracted from that of the gene of interest to obtain a ΔCt value. The ΔCt value of the controls was subtracted from the ΔCt value of each sample to obtain a ΔΔCt value. The gene expression level relative to the controls was expressed as 2^−ΔΔCt. All experiments were performed in triplicate.

CDK9-shRNA esophageal adenocarcinoma cells and pharmaceutical inhibitors of CDK9

To produce lentivirus expressing shCDK9, we cotransfected pLKO-shCDK9 (Sigma-Aldrich, St. Louis, MO) or control vectors with their packaging and envelope plasmids into 293FT cells using Lipofectamine 2000 reagent according to the manufacturer's instructions (Invitrogen, Carlsbad, CA). Forty-eight hours later, viral supernatant was collected after centrifugation at 3000 rpm for 15 min. For transduction with lentivirus, cells were infected with 2x diluted virus media containing 6 μg/ml polybrene for 16 hours. Cells with stably downregulated CDK9 expression by shCDK9 were selected by incubation in a medium containing puromycin for at least 2 weeks. The expression of target proteins was confirmed by western blot. For transient downregulation of CDK9, cells were harvested for western blot after 72 hours of infection with lenti-shCDK9.

Pharmaceutical inhibitors of CDK9, CAN508 and Flavopiridol, were purchased from EMD Millipore (Temecula, CA) and Cayman Chemical (Ann Arbor, MI), respectively. Both were dissolved in DMSO before their use in vitro or in xenografts.
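The 2^−ΔΔCt calculation described above reduces to a few arithmetic steps. The following minimal Python sketch is illustrative (the function and variable names are not from the paper):

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation described
# above (function and variable names are illustrative, not from the paper).
def relative_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Return target-gene expression relative to untreated controls."""
    d_ct_sample = ct_gene - ct_gapdh             # normalise to GAPDH
    d_ct_control = ct_gene_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control           # normalise to control
    return 2.0 ** (-dd_ct)

# Example: a ddCt of +2 corresponds to a 4-fold reduction (expression 0.25).
print(relative_expression(ct_gene=26.0, ct_gapdh=18.0,
                          ct_gene_ctrl=24.0, ct_gapdh_ctrl=18.0))  # -> 0.25
```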
Generation of stable MCL-1 overexpression in esophageal adenocarcinoma cells

Human MCL-1 cDNA was released from pCMV-SPORT6 (OriGene Technologies Inc., Rockville, MD) with EcoRI and subcloned into the lentiviral vector pCDH-VMV-MCS-EF1-Puro at the EcoRI site to create phMCL-1. The identity and orientation of this construct were confirmed by DNA sequencing (DNA core facility, MD Anderson Cancer Center, Houston, TX). To produce lentivirus overexpressing MCL-1, we cotransfected phMCL-1 or control vectors with their packaging and envelope plasmids into 293FT cells using Lipofectamine 2000 reagent according to the manufacturer's instructions (Invitrogen, Carlsbad, CA). Forty-eight hours later, viral supernatant was collected after centrifugation at 3000 rpm for 15 min. For transduction with lentivirus, cells were infected with 2x diluted virus media containing 6 μg/ml polybrene for 16 hours. Cells with stable expression of MCL-1 were selected by incubation in a medium containing puromycin for at least 2 weeks. Expression of target proteins was confirmed by western blot.

Cell apoptosis assay and cell cycle analysis

For the apoptosis assay, esophageal adenocarcinoma cells, after shCDK9 or treatment with CAN508 for 72 hours or Flavopiridol for 48 hours, were harvested, washed with cold PBS, resuspended in a solution containing 5 μl of recombinant Annexin V-FITC (BD Biosciences, San Jose, CA) and 5 μg/ml of propidium iodide, and incubated for 15 minutes. For cell cycle analysis, cells treated with CAN508 for 72 hours or Flavopiridol for 48 hours were fixed and stained with propidium iodide. Apoptotic cells and cell cycle were then analyzed by flow cytometry at the MD Anderson Cancer Center DNA analysis core facility (Houston, TX). Cells stained with propidium iodide alone were considered necrotic, whereas cells stained with Annexin V (Annexin V+ cells), with or without propidium iodide, were considered apoptotic.

Chromatin immunoprecipitation assay (ChIP)

We performed a ChIP assay to confirm that the reduction of MCL-1 expression by CDK9 inhibitors occurs at the transcriptional level and to determine whether CDK9 inhibitors affect the binding of transcription factors such as HIF-1α to the MCL-1 promoter. HIF-1α has been reported to play important roles in the regulation of MCL-1 expression [32,33], although HIF-1α-mediated regulation of MCL-1 has not been studied in esophageal adenocarcinoma. The ChIP assay was performed using the Pierce agarose ChIP kit (Thermo Scientific, Rockford, IL). Briefly, FLO-1 cells were treated with 0.4 μM of Flavopiridol or 40 μM of CAN508 for 4 hours and fixed with 1% formaldehyde solution to cross-link DNA and protein. The chromatin was then digested with micrococcal nuclease to obtain chromatin fragments ranging in size from 200 to 1000 base pairs. 10% of the chromatin fragments were used as input DNA. The immunoprecipitation was performed using either 1 μg of anti-HIF-1α antibody or an IgG control. The immunoprecipitated DNA was then quantitated using real-time PCR. Specific primers for the MCL-1 promoter (Forward: 5'-AGGTCACTTGAGGCCATGAG-3', Reverse: 5'-CACGTTCAGACGATTCGGTA-3') were used as previously reported [32]. These primers cover the −1051 to −901 bp region of the MCL-1 promoter. The enrichment of targeted genomic regions was assessed relative to the input DNA. The ChIP assay was run twice with both inhibitors, and Q-PCR in all 4 ChIP experiments was run in triplicate.
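The paper states that enrichment was assessed relative to input DNA but does not give the exact formula. One common way to express this from the Ct values is the "percent input" method, sketched below under the assumption that the 10% input fraction mentioned above is used for the dilution correction; this is an illustrative assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

# Illustrative "percent input" ChIP quantification (an assumption, not
# necessarily the exact formula used in the paper). A 10% input fraction
# was saved (see above), so the input Ct is first adjusted for dilution.
def percent_input(ct_ip, ct_input, input_fraction=0.10):
    """Percent of input chromatin recovered in the immunoprecipitation."""
    ct_input_adj = ct_input - np.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Example: IP Ct 28 vs 10%-input Ct 24 -> ~0.63% of input recovered.
print(f"{percent_input(28.0, 24.0):.2f} % of input")
```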
Esophageal adenocarcinoma xenograft studies

All experiments involving mice were conducted according to an animal experimental protocol approved by the Institutional Animal Care and Use Committee (IACUC) at The University of Texas MD Anderson Cancer Center. To assess the in vivo effects of genetic downregulation of CDK9 (shCDK9), 2 × 10^6 SKGT4-shCDK9 and control SKGT4 cells were injected subcutaneously in the abdominal wall of 4-week-old female nude mice. A total of 20 mice were injected, with 4 injections per mouse. The injection sites included the right upper and left lower quadrants for parental SKGT4 cells and the left upper and right lower quadrants for shCDK9 SKGT4 cells. The distance between injection sites was at least 3 cm. The volume of the xenograft tumors was measured every three days for 7 weeks using a digital caliper. All mice were euthanized and tumor tissues were harvested at the end of the 7th week. The tumor volume was calculated, as per the institutional IACUC-recommended protocol, as (W^2 × L)/2, in which W is the smallest diameter and L the largest diameter of the tumor. The outer edge of each tumor was more than 2 cm away from the nearest tumor, and no tumor was identified in the tissue between tumors on inspection and palpation.

The efficacy of CAN508 and Flavopiridol in xenografts of esophageal adenocarcinoma cells was studied in two separate experiments. One experiment compared the efficacy of CAN508 (60 mg/kg given daily intraperitoneally for 10 days) with control, and the other compared the efficacy of Flavopiridol (4 mg/kg given daily intraperitoneally for 8 days) with control. FLO-1 cells (4 × 10^6 cells per animal) were injected in the right flank of four-week-old female athymic nu/nu mice. Once a tumor reached 5 mm in size, mice were randomized into one of two groups, one treated with the CDK9 inhibitor and the other treated with control vehicle (DMSO), with 5 mice in each group. The volume of the xenograft tumors was measured every three days using a digital caliper. All mice were euthanized and tumor tissues were harvested 21 days after treatment. The tumor volume was calculated as (W^2 × L)/2, in which W is the smallest diameter and L the largest diameter of the tumor.

Statistical analysis

In vitro experiments were repeated at least three times. For each assay, Student's t-test was used to compare groups for the in vitro and xenograft data and for CDK9 staining intensity in patient samples of Barrett's esophagus and esophageal adenocarcinoma. Errors are S.E. values of averaged results. A p value < 0.05 was considered significant.

The study was performed under an approved institutional IRB protocol with waiver of informed consent (LAB-04-0979, PI Maru) and an IACUC protocol (1155-RN00).

Figure 1: (A) Western blot showing CDK9 expression in esophageal adenocarcinoma and normal squamous epithelial (HET-1A) cell lines. Band intensity was measured with Photoshop software. Data are normalized to GAPDH and presented as values relative to the HET-1A cells. (B-E) CDK9 protein expression by immunohistochemistry in matched samples of Barrett's esophagus (1B, 100X magnification; 1C and 1D, 200X magnification) and esophageal adenocarcinoma (1E, 100X magnification).
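For reference, the caliper-based volume formula used above is a one-line computation; this tiny helper is illustrative (the function name is not from the paper):

```python
def tumor_volume(w_mm: float, l_mm: float) -> float:
    """Ellipsoid-approximation tumor volume (W^2 x L)/2, in mm^3, as used
    for the caliper measurements described above (illustrative helper)."""
    return (w_mm ** 2) * l_mm / 2.0

# Example: a tumor measuring 5 mm (smallest) x 8 mm (largest) -> 100 mm^3.
print(tumor_volume(5.0, 8.0))
```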
CAN508 (40 μM, 4 h) and Flavopiridol (0.4 μM, 4 h) significantly reduced the signal of MCL-1 promoter bound by the HIF-1α antibody as compared to control (DMSO), with a very low signal for non-specific binding to rabbit IgG. Input DNA levels are unrelated to the binding of DNA to protein and are primarily used as a control (Figure 6A and 6B). No change in HIF-1α protein was observed after treatment with Flavopiridol (0.2 and 0.4 μM, 4 h) or CAN508 (20 μM, 4 h), and a minimal decrease was observed with CAN508 (40 μM, 4 h; Figure 6C).

Figure 2: Effects of genetic (shRNA) downregulation of CDK9. (A) The downregulated CDK9 expression in SKGT4 cells by stable shCDK9 was confirmed by western blot (top panel). Effects of stable shCDK9 on cell proliferation in SKGT4. Cell proliferation was measured by MTS assay using the CellTiter Aqueous One Solution Cell Proliferation Assay kit (three separate experiments). (B) Effects of stable shCDK9 on the apoptosis of SKGT4 cells (three separate experiments). Cells stained with propidium iodide alone were considered necrotic, whereas cells stained with Annexin V, with or without propidium iodide, were considered apoptotic. (C) Effects of stable shCDK9 on cell cycle stages in SKGT4 cells (three separate experiments), showing G1 arrest of shCDK9 SKGT4 vs. control. (D) Representative photograph of tumor xenografts and (E) the volume (means ± SE) of tumor xenografts at the end of the experiment with stable shCDK9 and control SKGT4 cells. * represents p-value < 0.05 compared to untreated controls. →: xenograft of shCDK9. ►: xenograft of controls. (F) Western blot analysis of MCL-1 and c-MYC expression and phosphorylation of RNA Pol II in stable SKGT4-shCDK9 and control cells. (G) Western blot analysis of MCL-1 and c-MYC expression and phosphorylation of RNA Pol II after transient shCDK9. Cell lysates from cells infected with lenti-shCDK9 for 72 hours were analyzed by western blot with antibodies against MCL-1, c-MYC or RNA Pol II phosphorylated at the S2 site (two separate experiments).

Figure 3: Effects of pharmaceutical inhibition of CDK9 on cell proliferation, apoptosis and cell cycle. Cell proliferation was measured after treatment with CAN508 for 72 hours (A, left panel) or Flavopiridol for 48 hours (A, right panel) by MTS assay using the CellTiter Aqueous One Solution Cell Proliferation Assay kit. The apoptosis (B) and cell cycle phases (C) of esophageal adenocarcinoma cells were determined by flow cytometry. Values are shown as means ± SE of 3 independent experiments. * represents p-value < 0.05 compared to untreated controls.

Figure 4: Effects of pharmaceutical inhibition of CDK9 on EAC cell growth in nude mice. Nude mice were subcutaneously injected with 4 × 10^6 FLO-1 cells. Mice bearing established xenografts were then treated with a dose of 60 mg/kg CAN508 by once-daily intraperitoneal injection for 10 days (A) or a dose of 4 mg/kg Flavopiridol by once-daily intraperitoneal injection for 8 days (B). Tumor growth was measured by tumor volume. Data are presented as the percentage of tumor growth. * represents p-value < 0.01 compared to the control group. Body weight was measured once a week, and data are shown relative to the body weight on the day treatment started.
Figure 5: Effects of pharmaceutical inhibition of CDK9 on RNA Pol II phosphorylation and MCL-1 expression. (A) Cells were treated with CAN508 or Flavopiridol for 4 hours. The phosphorylation of RNA Pol II and the expression of MCL-1 were examined by western blot. (B) Cells were treated with CAN508 or Flavopiridol for 4 hours after pretreatment with or without MG132 for 1 hour. Expression of MCL-1 was then detected by western blot. The band intensity was measured with Photoshop software. The data were normalized to housekeeping genes (GAPDH or β-actin) and presented as values relative to the corresponding controls. (C) The mRNA levels of MCL-1 were measured by Q-PCR after treatment with CAN508 (left panel) or Flavopiridol (right panel) for 4 hours at the indicated doses. Values are shown as means ± SE of three independent experiments. * represents p-value < 0.05 compared to untreated controls. (D) Cells were treated with CAN508 or Flavopiridol for 4 hours. The phosphorylation of RNA Pol II and the expression of c-MYC and CDK9 were examined by western blot.

Figure 6: Effects of pharmaceutical inhibition of CDK9 on the binding of HIF-1α to the MCL-1 promoter. The binding of HIF-1α to the MCL-1 promoter region was evaluated by ChIP assay in FLO-1 cells treated with 40 μM of CAN508 (A) or 0.4 μM of Flavopiridol (B) for 4 hours, as described in Materials and Methods. Q-PCR results show the mean of triplicates for each treatment from a typical experiment; bars, ±SE. Similar results were observed in two independent experiments. (C) Western blot analysis of HIF-1α expression after treatment with CDK9 inhibitors for 4 hours. The band intensity was measured with Photoshop software. The data were normalized to β-actin and presented as values relative to controls.
ANALYSIS AND COMPARISON OF MACHINE LEARNING APPROACHES FOR TRANSMISSION LINE FAULT PREDICTION IN POWER SYSTEMS

Transmission lines suffer from various faults due to numerous natural as well as man-made causes. This paper presents a MATLAB-SIMULINK model for the generation of such random disturbances. The output of the system is fed into a Python-based model in order to detect and predict the exact nature of the disturbances using various machine learning classifiers, with their respective accuracy scores. The paper provides a brief comparison between Decision Tree Classifier, Random Forest Classifier, Support Vector Machines, K-Nearest Neighbors and Multi-Layer Perceptron methodologies for the detection of a line-to-ground fault, as an example, in this model-based approach.

Introduction

We live in an era of ever-increasing power demand. Every power utility works hard to reduce the consequences of power failure and to reduce system downtime, keeping in mind that every transmission line has its own operating limits. Faults within a transmission line should be cleared as soon as possible to increase the overall reliability of the system [1][2]. Faults may occur in a transmission line for different reasons. Each type of fault has a different phase angle, magnitude and intensity at the fault point [3][4]. A fault may result in an increase in the magnitude of the phase current or a decrease in the magnitude of the phase voltage. The intensity of the fault depends on the type of fault occurring at that point, e.g., line-to-ground (L-G), line-to-line (L-L), double line-to-ground (L-L-G) or three-phase (L-L-L) fault. Among these, line-to-ground (L-G) faults are the most frequent, accounting for about 70% of all faults [5][6][7][8]. A fault involving disturbances in all three phases is termed a symmetrical fault, while a fault involving one or two phases is termed an unsymmetrical fault. The need of the hour is to classify all kinds of faults in real time to restore uninterrupted supply within the minimum possible time, thereby increasing the reliability of the overall power system. In reality, the transmission system consists of thousands of interconnected buses and protective devices, which makes conventional methods unsuitable for accurate real-time fault detection and classification. The conventional approach uses the traditional distance relay as the parameter of study, which may introduce additional errors into the system. Fault classification is generally done by comparing the matrix values of current and voltage in a healthy phase with the matrix values at fault time, requiring high computational power and efficient software [9][10][11][12]. This takes unnecessary time in classification and decision-making, leading to decreased reliability of the overall system. In the present scenario, utilities and customers need highly reliable power systems. Hence, the system needs to be error-free, efficient and able to take autonomous decisions in critical situations. This paper introduces various machine learning approaches, such as K-Nearest Neighbors, Multi-Layer Perceptron, Support Vector Machine and Decision Tree Classifier, for the classification and predictive analysis of transmission line faults using the dataset matrices generated during normal and faulted conditions.
The output of this paper is an accuracy score for each of the above-mentioned algorithms; the paper compares all the proposed Python-based models and concludes which method is best for the analysis and prediction of the line-to-ground fault.

Machine Learning Techniques

Machine learning enables computers to make smart decisions without being explicitly programmed. It enables computers to predict a certain output based on experience (data sets). A machine may learn based on a mapping function (supervised learning) or on clustering algorithms (unsupervised learning). Some machine learning algorithms also revolve around decision-making, such as the Decision Tree Classifier and the Random Forest Classifier. A decision tree classifier predicts the value of responses by learning decision rules that are derived from certain feature points. This paper provides a brief comparison between various supervised algorithms for predicting the line-to-ground fault. The methodology adopted is supervised learning, including K-Nearest Neighbors, Multi-Layer Perceptron, Support Vector Machine and Decision Tree Classifier. A supervised machine learning algorithm requires optimized datasets with clear-cut learning patterns to perform with a good accuracy score and to obtain fast processing capabilities.

Dataset Filtering

Transmission line fault simulation is performed on the MATLAB-SIMULINK platform. The generated dataset is exported to the MATLAB workspace from SIMULINK, consisting of specific labels and specific features in RMS values of volts and amperes. The feature set consists of three phase voltages and three phase currents: Va, Vb, Vc and Ia, Ib, Ic.

Predictive Algorithm

The supervised machine learning algorithms use various learning patterns on the feature sets of RMS values of voltage and current. This paper uses optimized feature sets to strengthen the predictive ability of four algorithms, namely KNN, SVM, Decision Tree Classifier and MLP. It also provides a brief comparison among these algorithms based on the root-mean-square (RMS) error and the accuracy score obtained upon experimentation on the L-G fault.

K-Nearest Neighbors (KNN)

KNN is a non-parametric and lazy learning tool used for regression and classification in predictive problems. 'K' in KNN is the number of nearest neighbors included in the majority voting process for the similarity measure. The algorithm is based on feature similarity, and choosing the right value of 'K' by parameter tuning is very important for improved accuracy. In this paper, K-Nearest Neighbors is implemented with the Python KNeighborsClassifier module. This classifier works as a clustering-like algorithm that maps the distance between feature sets. The value of 'K' is varied between 1 and 25.

Support Vector Machines (SVM)

Usually, it is much easier to classify patterns that are linearly separable, that is, when a hyperplane separating the classes can be formulated so that the patterns belonging to a particular class lie on a distinct side of the hyperplane. If the patterns are not linearly separable, the classification task becomes much more difficult. The SVM is capable of classifying both linearly and non-linearly separable patterns. A hyperplane is formulated which fits the dataset according to the classes. SVM revolves around the idea of finding the hyperplane that best separates the features into different domains.
The points closest to the hyperplane are called support vectors, and the distance of these vectors from the hyperplane is called the margin. The SVM seeks to draw an optimal hyperplane between the classes that maximizes the margin of separation, so that the number of misclassified samples is reduced. In this paper, a Radial Basis Function (RBF) is used as the non-linear kernel function for the SVM model. SVM works with both linear and non-linear kernel functions using the sklearn SVM module, running on the Anaconda-Python IDE.

Multi-Layer Perceptron (MLP)

MLP is a supervised learning technique principally working with the backpropagation algorithm. MLP neural networks use a gradient descent approach to iteratively update the weights in a feed-forward neural network, so that after training and testing the MLP captures the inherent characteristics of the training data and can act as a nonlinear model of the actual system, in this case a fault classifier. In this paper, MLP is used to separate non-linearly separable data using a non-linear activation function, via sklearn.neural_network running on the Anaconda-Python IDE.

Decision Tree (DT)

This is a supervised and non-parametric method used to classify feature sets, based on decision rules traversing multiple nodes of a tree. In this paper, the decision tree is imported from the sklearn.tree module in the Anaconda-Python IDE.

Random Forest (RF)

RF is a supervised learning technique that consists of multiple decision trees; the trees are trained on the same task, but each tree can reach a different prediction. A Random Forest is, in essence, an ensemble of decision trees whose individual outputs are aggregated (by averaging or voting) into a final prediction. Here, the Random Forest classifier is implemented using the sklearn.ensemble.RandomForestClassifier module in the Anaconda-Python IDE.

Faults in transmission line

As discussed in previous sections, the use of machine learning techniques can greatly enhance the overall reliability of power systems, as it can precisely predict the nature of a fault occurring in the transmission line, thereby helping utilities in the fault detection, isolation and clearance procedure within the minimum time possible.

Causes of Fault

Faults are unavoidable and random in occurrence. Among all power system equipment, the transmission line is the most exposed to the environment. Hence, the transmission line is more prone to faults than any other equipment, which affects its stability and operating limits. Overloading is also a catalytic factor, leading to insulation breakdown at an early stage. In this paper, the line-to-ground fault is taken as the test case for the predictive models, mainly because the majority of faults occurring in transmission lines are line-to-ground in nature. Physical damage to the conductor, often due to natural causes, can result in contact of one of the three phases with the ground. Further sections of this paper cover the simulation of a transmission line using MATLAB-SIMULINK in normal conditions as well as in line-to-ground fault conditions, to generate specific datasets in CSV file format. The dataset acts as the experience (feature sets) for the respective predictive algorithms, which then produce an accuracy score and a root-mean-square error value. The waveform in Fig. 4 is the output of the simulation of the power system model in the no-fault condition.
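For reference, the five classifiers described above can be instantiated and compared in a few lines of scikit-learn. The following sketch is illustrative and not the authors' code: the CSV file name and column names are assumptions based on the dataset description, and the hyperparameters shown are defaults rather than the tuned values used in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical CSV exported from the SIMULINK workspace: six RMS features
# (Va, Vb, Vc, Ia, Ib, Ic) and a binary label (0 = healthy, 1 = L-G fault).
data = pd.read_csv("lg_fault_dataset.csv")
X = data[["Va", "Vb", "Vc", "Ia", "Ib", "Ic"]].values
y = data["label"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),   # K tuned over 1..25
    "SVM (RBF)": SVC(kernel="rbf"),
    "MLP": MLPClassifier(max_iter=1000, random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.4f}")
```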
System Modelling for Fault Datasheet Generation

A single line-to-ground fault occurs when one of the phases of the three-phase line is shorted to ground. At the time of occurrence of the fault, the impedance need not be zero but can be a very small value relative to the line impedance. The numeric quantities of the three phase voltages Va, Vb, Vc and currents Ia, Ib, Ic are recorded after having been generated in both the normal and the faulty condition. The data is then tabulated and exported as a CSV file from the workspace. A snapshot of a CSV sheet with the data in the normal and faulty condition is shown in Fig. 6 and Fig. 7. In the training and testing datasets, the label zero signifies a healthy network and one signifies a faulty network. The data is subsequently fed into the machine learning algorithms for training.

Machine Learning Algorithm Design and Accuracy Count

The main objective of this paper is to develop a machine learning based autonomous self-learning system that has the capability of self-acquisition of knowledge in real time with little supervision. In this paper, the evaluation of the different algorithms is done by the accuracy score and the root-mean-square error, which are commonly used with multi-label data, and the result is measured in percentage. Here the accuracy can be represented as

Accuracy = (number of matches between y_predict and y_true) / (number of samples)

The accuracy is found by dividing the number of matches by the number of samples: from the given lists y_predict and y_true, each sample index 'i' is compared to find matches, and the accuracy is calculated from the number of matches. The root-mean-square error can be represented as

RMSE = sqrt( (1/n) · Σ (y_predict(i) − y_true(i))² ), summed over all n samples

The root-mean-square error measures the average magnitude of the error as the square root of the average of the squared differences between predictions and actual observations.

Implementation of Decision Tree Classifier

In the decision tree classifier model, a sequence of test cases and conditions is organized in a tree structure, and classification takes place based on decision rules.

Fig. 8: Predicted (a) and Testing (b) Labels of Decision Tree

Fig. 8 shows the predicted and testing labels of the decision tree classifier, which is a non-parametric supervised learning method. Prediction points for the datasets are shown as a red line and all testing data points as a blue line. Points at level '0' indicate training and testing data for the normal operating condition; '1' indicates training and testing data for fault conditions. The training dataset was fed into the decision tree classifier, and the testing dataset was predicted by the classifier with an accuracy of up to 86.17%.

Implementation of Support Vector Machines

Support Vector Machines (SVM) is a supervised learning algorithm which fits the data according to the classes after finding a hyperplane, and performs a distinct classification of the data points.

Fig. 9: Predicted (a) and Testing (b) Labels of SVM

Fig. 9 shows the predicted and testing labels of the support vector machine after separation of the data points into different classes by a hyperplane. Prediction points for the datasets are shown as a red line and all testing data points as a blue line. Points at level '0' indicate training and testing data for the normal operating condition; '1' indicates training and testing data for fault conditions. The training dataset was fed into the support vector machine classifier, and the testing dataset was predicted by the classifier with an accuracy of up to 75.94%.
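Expressed in code, both metrics reduce to a couple of NumPy operations. This is a minimal sketch; the example label vectors are made up for illustration:

```python
import numpy as np

# Minimal sketch of the two evaluation metrics defined above; y_true and
# y_predict are illustrative label vectors (0 = healthy, 1 = L-G fault).
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_predict = np.array([0, 1, 0, 0, 1, 0, 1, 1])

accuracy = np.mean(y_predict == y_true)           # matches / samples
rmse = np.sqrt(np.mean((y_predict - y_true) ** 2))

print(f"accuracy = {accuracy:.4f}")  # 6 matches out of 8 -> 0.75
print(f"RMSE = {rmse:.4f}")          # sqrt(2/8) = 0.5
```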
Implementation of K-Nearest Neighbors

K-Nearest Neighbors is a supervised, lazy and non-parametric learning algorithm used for classification in predictive problems; it outputs a class membership and uses distance for classification. Fig. 10 shows the predicted and testing labels of K-Nearest Neighbors, obtained by assigning weights to the contributions of the neighbors, where the nearest neighbors contribute more. Prediction points for the datasets are shown as a red line and all testing data points as a blue line. Points at level '0' indicate training and testing data for the normal operating condition; '1' indicates training and testing data for fault conditions. The training dataset was fed into the K-Nearest Neighbors classifier, and the testing dataset was predicted by the classifier with an accuracy of up to 88.89%.

Implementation of Multi-Layer Perceptron

The Multi-Layer Perceptron (MLP) provides a nonlinear mapping between an input and an output vector and uses a nonlinear activation function. It employs a supervised learning technique called backpropagation for training.

Fig. 11: Predicted (a) and Testing (b) Labels of MLP

Fig. 11 shows the predicted and testing labels of the Multi-Layer Perceptron, obtained by utilizing a nonlinear activation function and backpropagation for training. Prediction points for the datasets are shown as a red line and all testing data points as a blue line. Points at level '0' indicate training and testing data for the normal operating condition; '1' indicates training and testing data for fault conditions. The training dataset was fed into the Multi-Layer Perceptron classifier, and the testing dataset was predicted by the classifier with an accuracy of up to 78.53%.

Implementation of Random Forest Classifier

The Random Forest classifier is also a supervised learning algorithm. It creates many decision trees, takes the prediction from each of them, and selects the best result among them by voting. Fig. 12 shows the predicted and testing labels of the Random Forest classifier, obtained from the mean prediction of the trees. Prediction points for the datasets are shown as a red line and all testing data points as a blue line. Points at level '0' indicate training and testing data for the normal operating condition; '1' indicates training and testing data for fault conditions. The training dataset was fed into the Random Forest classifier, and the testing dataset was predicted by the classifier with an accuracy of up to 85.55%.

Analysis of Results

The five algorithms, namely Decision Tree Classifier, Support Vector Machines classifier, K-Nearest Neighbors classifier, Multi-Layer Perceptron and Random Forest Classifier, were applied to the whole dataset after splitting it into training and testing parts. The comparison is made on the basis of the accuracy score, where K-Nearest Neighbors gave the best accuracy, close to 89 percent, whereas Support Vector Machines did not perform well, producing an accuracy close to 76 percent.

Conclusion

This paper provides a predictive model for the detection of faults in transmission lines. The predictive model uses phase currents as input to the machine learning system. The outcome of this predictive model provides a suitable algorithm for the design of a protective stratagem for transmission lines based on machine learning. Since the method is reliable and feasible, modelling of the transmission line can be carried out.
Support vector machines are supposed to perform well on small feature sets, but this is not always true. In cases where the dataset is not separable by a single curve, SVM will perform worse than other neural-network approaches, and with more data an MLP will naturally outperform an SVM. From this, it is concluded that the dataset is not separable by a perfect curve and instead forms small clusters in feature space, a setting in which KNN often performs well on clustered data, as in this case.
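The consolidated comparison referred to above could be sketched as follows; a hedged example that assumes the feature matrix X and label vector y prepared as in the previous sketch, with default hyperparameters that need not match the authors' settings.

```python
# Hedged sketch: compare the five classifiers used in the paper on one split.
# X and y are assumed to be prepared as in the previous sketch.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

classifiers = {
    "Decision Tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),   # lazy, distance-based learner
    "MLP": MLPClassifier(max_iter=1000),          # trained with backpropagation
    "Random Forest": RandomForestClassifier(),    # many trees, vote on the result
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in classifiers.items()}

for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):  # best first
    print(f"{name}: {acc:.2%}")
```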
Neural Improvement Heuristics for Preference Ranking

In recent years, Deep Learning based methods have been a revolution in the field of combinatorial optimization. They learn to approximate solutions and constitute an interesting choice when dealing with repetitive problems drawn from similar distributions. Most effort has been devoted to investigating neural constructive methods, while works that propose neural models to iteratively improve a candidate solution are less frequent. In this paper, we present a Neural Improvement (NI) model for graph-based combinatorial problems that, given an instance and a candidate solution, encodes the problem information by means of edge features. Our model proposes a modification of the pairwise precedence of items to increase the quality of the solution. We demonstrate the practicality of the model by applying it as the building block of a Neural Hill Climber and other trajectory-based methods. The algorithms are used to solve the Preference Ranking Problem, and results show that they outperform conventional alternatives on simulated and real-world data. The conducted experiments also reveal that the proposed model can be a milestone in the development of efficiently guided trajectory-based optimization algorithms.

Introduction

Combinatorial Optimization Problems (COPs) are present in a broad range of real-world applications, such as logistics, manufacturing or biology [1]. Due to the NP-hard nature of most COPs, finding the optimal solution to medium/large-sized problems is intractable [2]. As a result, in the last few decades, heuristic methods have been established as an alternative to approximate hard optimization problems in a reasonable amount of time. Despite being widely used, a major drawback of such methods is the need to exhaustively evaluate a large number of candidate solutions. In the last decade, the rapid development of Deep Learning (DL) has enabled the creation of neural-network-based models that can substantially decrease the computational effort in optimization. Two main frameworks can be distinguished: constructive methods and improvement methods. The former generates a unique solution incrementally, iteratively adding an item to a partial solution until it is completed. Conversely, improvement methods take a candidate solution and suggest a modification to improve it. In fact, the improvement process can be repeated iteratively, using the modified solution as the new input of the model. These methods can potentially reduce the extensive search by focusing only on trajectories leading to optimal solutions. In this paper, we focus on improvement methods and propose a Neural Improvement (NI) model for graph-based combinatorial problems. This model encodes a given solution by means of edge features and proposes a modification of the pairwise precedence of some items, with the aim of obtaining a better candidate. To illustrate its usage, we apply the model to solve the Preference Ranking Problem (PRP), an NP-hard problem that seeks an optimal pairwise consensus over a set of items. In particular, this paper makes the following contributions:

• Presents a Reinforcement Learning formulation for a Neural Improvement model parameterized by Graph Neural Networks and Attention Mechanisms.

• Introduces a novel encoding-decoding process which considers as input both the stationary instance information and a candidate solution to be improved, by means of the graph edges.
• Conducts an exhaustive analysis of the short-term (one improvement step) and long-term (multiple improvement steps) inference capabilities of the Neural Improvement model for the PRP. Moreover, develops a Neural Hill Climber based on the presented model and compares its performance to conventional procedures.

• Demonstrates the applicability of the model by using it as the building block of two advanced hill climbing methods: Tabu Search and Iterated Local Search.

Related Work

Neural Networks (NN) have been used to solve COPs since the 1980s, in the form of Hopfield Networks [3]. However, as seen in recent surveys [4,5], the growth in computing power and the development of advanced architectures in the last decade have enabled more efficient applications that are getting closer to state-of-the-art algorithms. As mentioned previously, NN-based optimization methods can be divided into two main groups according to their strategy.

Neural Constructive Methods

Most DL-based works develop policies that learn a constructive heuristic. These methods start from an empty solution and iteratively add an item to the solution until a certain stopping criterion is met. In one of the earliest works in the Neural Combinatorial Optimization paradigm, Bello et al. [6] used a Pointer Network model [7] to parameterize a policy that constructs a solution, item by item, for the Travelling Salesman Problem (TSP). Motivated by the results in [6], and mainly focusing on the TSP, DL practitioners have successfully implemented different architectures such as Graph Neural Networks [8,9] or Transformers [10,11]. As previously seen in the Operations Research literature, the solutions obtained by these proposals are not competitive with state-of-the-art methods [12]. In fact, since the performance of these models is still far from optimality (mostly on large instances), they are usually enhanced with supplementary algorithms that increase solution diversity at the cost of additional computational time. A common practice is to use active search [6], where rewards obtained from the evaluation instances are used to fine-tune the parameters of a model. An alternative is to use sampling [6,10] or beam search [7] to further explore the neighborhood of the solution proposed by the model.

Neural Improvement Methods

Improvement methods, also known as trajectory methods, are initialized with a given solution and iteratively propose a (set of) modification(s) to improve it until the solution cannot be further improved. Neural Improvement (NI) methods use the learned policy to navigate intelligently across the different neighborhoods. To that end, the architectures used for constructive methods have been reused for implementing improvement methods. Chen et al. [13] use LSTMs to parameterize two models: one model assigns a score or probability to each region of the solution to be rewritten, while a second model selects the rule that modifies that region. Lu et al. [14] use the Transformer model to select a local operator among a pool of operators to solve the capacitated vehicle routing problem (VRP). Finally, Wu et al. [15] use a similar architecture to train a policy that selects the node pair to which a local operator, e.g., 2-opt, is applied. Improvement methods not only incorporate the stationary instance data, but also need to consider the current solution.
In fact, encoding the solution information into a latent space that is understandable to the model is a major challenge for most combinatorial problems. In routing problems, there are various ways of representing solutions. Each node (or city) can maintain a set of features that indicate its relative position in the current solution, such as the location of, and the distance to, the previously and subsequently visited nodes [14]. However, this technique does not consider the whole solution as one; instead, it only contemplates consecutive pairs of nodes in the solution. A common alternative when using the transformer architecture is to incorporate Positional Encodings (PE), which capture the sequence of the visited cities in a given solution [10]. Recently, Ma et al. [16] proposed a cyclic PE that captures the circularity and symmetry of the routing problem, making it more suitable for representing solutions than the conventional PE. However, in some graph problems, the edges can provide useful information that is not contained in the nodes, namely, the relative information among nodes. Moreover, in some problems the essential information is encoded in the edges, and thus prior methods that focus on node embeddings [14,16] are not capable of properly encoding the relative information. In the following section, we present the optimization problem that illustrates the need to develop neural models that consider edge features.

Preference Ranking Problem

Ranking items based on preferences or opinions is, in general, a straightforward task if the number of alternatives to rank is relatively small. Nevertheless, as the number of alternatives/items increases, it becomes harder to obtain full rankings that are consistent with the pairwise item preferences. Think of ranking 50 players in a tournament using their paired comparisons, from the best-performing player to the worst. Obtaining the ranking that agrees with most of the pairwise comparisons is not trivial. This task is known as the Preference Ranking Problem (PRP) [17]. Formally, given a preference matrix B = [b_ij]_(N×N), where the entry b_ij represents the preference of item i over item j, the aim is to find the simultaneous permutation ω of the rows and columns of B that maximizes the sum of the entries in the upper triangle of the matrix, f(ω) = Σ_{i<j} b_{ω(i)ω(j)} (Eq. 1). Note that row i describes the preference vector of item i over the rest of the N − 1 items, while column i denotes the preference of the rest of the items over item i. Thus, in order to maximize the upper triangle of the matrix, preferred items must come earlier in the ranking (see Fig. 1a). Alternatively, the problem can be formulated on a complete bidirected graph where nodes represent the set of items to be ranked and the weighted edges denote the preferences between items. A pair of nodes i and j has two connecting edges, (i, j) and (j, i), with weights b_ij and b_ji that form the previously mentioned preference matrix B. A solution (permutation) to the PRP can also be represented as an acyclic tournament on the graph, where the node (item) ranked first has only outgoing edges, the second in the ranking has one incoming edge and the rest outgoing, and so on until the last-ranked node, which has only incoming edges (see Fig. 1b and 1c). A minimal evaluation of this objective is sketched below.
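As a minimal, hedged illustration of Eq. 1 (the instance below is a made-up 4-item example, not one from the paper):

```python
# Minimal sketch: evaluate a PRP ranking by summing the upper triangle of the
# preference matrix after simultaneously permuting its rows and columns.
import numpy as np

def prp_objective(B: np.ndarray, ranking: list) -> float:
    """f(w) = sum of b_ij over all pairs where i precedes j in the ranking."""
    P = B[np.ix_(ranking, ranking)]       # simultaneous row/column permutation
    return float(np.triu(P, k=1).sum())   # strict upper triangle

# Hypothetical 4-item instance.
B = np.array([[0, 3, 1, 2],
              [1, 0, 4, 0],
              [2, 1, 0, 5],
              [1, 3, 0, 0]])
print(prp_objective(B, [0, 1, 2, 3]))     # 15.0 for the identity ranking
print(prp_objective(B, [2, 0, 1, 3]))     # 13.0 for an alternative candidate
```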
Applications

Ranking from pairwise comparisons is a ubiquitous problem in modern Machine Learning research. It has attracted the attention of the community due to its applicability in various research areas, including but not limited to machine translation [18], economics [19], corruption perception [20], or any other task requiring a ranking of items, such as sports tournaments, web search, resource allocation and cybersecurity [21,22,23].

The improvement process can be formulated as a Markov Decision Process (MDP), where a policy π is responsible for selecting an action a_t at each step t based on a given state s_t of the problem. The main entities of the MDP in this work can be described as:

• State. A state s_t represents the information of the environment at step t. In this case, the state gathers data from two information sources: (1) stationary data, which is the PRP instance to be solved, and (2) dynamic data, that is, a candidate solution ω to the problem at step t.

• Action. At every step, the learnt policy selects an action a_t, which involves a pair of items that, according to the policy, are incorrectly ranked. Once selected, an operator is applied that alters the precedences of one, both or more items.

• Reward. The transition between states s_t and s_{t+1} is derived from a local operator applied to the pair of items given by a_t. The reward function represents the improvement of the solution quality across states. Different function designs can be used, as will be explained in Section 4.2.

NI Model

We parameterize the policy π as an NN model with trainable parameters θ. Fig. 2 presents the general architecture of the model, composed of two sub-models: an encoder and a decoder.

Encoder

Given a graph instance of size N that represents a PRP, there are N × N edges or node pairs, and each edge (i, j) represents the precedence of node i with respect to node j. Note that only N × (N − 1) edges need to be considered, since edge (i, i) does not provide any useful information. As previously noted, the policy considers both the instance information (stationary) and a candidate solution at time step t (dynamic). For this purpose, we use a two-dimensional feature vector x_ij ∈ R² for each edge (i, j). The first dimension of x_ij takes the value b_ij of the preference matrix when node i precedes node j in the solution, and is set to zero otherwise. Similarly, the second dimension takes the value b_ji of the preference matrix when node j precedes node i in the solution, and is set to zero otherwise. Conversely, nodes do not reflect any problem-specific information. In fact, following a strategy similar to that proposed by Kwon et al. [24], we use a vector of ones as node features, n ∈ R^N. Even though all nodes are initialized with the same value, these features help to spread edge features across the graph during the encoding: node i gathers information from edges (i, k) and (k, i) for k ∈ {1, ..., N}. Node and edge features are linearly projected to produce d-dimensional node embeddings h_i = n_i V_h + U_h and edge embeddings e_ij = x_ij V_e + U_e, where V_e ∈ R^(2×d) and V_h, U_e and U_h ∈ R^(1×d) are learnable parameters. The encoding process consists of L Graph Neural Network (GNN) layers (indexed by the superscript l) that perform sequential message passing between nodes and their connecting edges (see the left part of Fig. 2). Eqs. 4 and 5 define the message passing in each layer, where W^l_1, W^l_2, W^l_3, W^l_4 and W^l_5 ∈ R^(d×d) are learnable parameters, BN denotes the batch normalization layer, σ is the sigmoid function and ⊙ is the Hadamard product.
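A hedged sketch of this edge featurization follows; function and variable names are my own, not the paper's.

```python
# Hedged sketch: build the two-dimensional edge features x_ij described above.
# x[i, j, 0] = b_ij if i precedes j in the candidate solution, else 0
# x[i, j, 1] = b_ji if j precedes i in the candidate solution, else 0
import numpy as np

def edge_features(B: np.ndarray, ranking: list) -> np.ndarray:
    N = B.shape[0]
    pos = np.empty(N, dtype=int)
    pos[ranking] = np.arange(N)        # pos[i] = position of item i in the ranking
    x = np.zeros((N, N, 2))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue               # edge (i, i) carries no useful information
            if pos[i] < pos[j]:
                x[i, j, 0] = B[i, j]   # i precedes j
            else:
                x[i, j, 1] = B[j, i]   # j precedes i
    return x

# Tiny random instance, just to show the shapes.
x = edge_features(np.random.rand(5, 5), [3, 0, 4, 1, 2])
print(x.shape)  # (5, 5, 2): one 2-D feature vector per directed edge
```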
The output of the encoder, which is fed to the decoder, consists of the edge embeddings of the last layer, e^L_ij, plus the graph embedding e_G, an aggregation (mean) of the edge embeddings.

Decoder

The graph embedding is projected to form the query (Q = W_g e_G) of a multi-head attention mechanism (MHA) [25]. The MHA mechanism measures the compatibility of the query with each of the keys, i.e., the edge embeddings (K = V = e_ij). The output is a context vector e_c = MHA(Q, K, V) that is refined in a second attention layer to produce the logits u_ij; the tanh function is used to keep the logits within [−C, C] (C = 10). The logits are then normalized using the Softmax function to produce a matrix p ∈ R^(N×N) that gives the probability of modifying the precedence between items i and j. The model is set to sample from the probability matrix during training and to select the action with maximum probability during inference.

Learning

Loss Function

The improvement policy is learned using the REINFORCE algorithm [26]. Given a state s_t = (B, ω_t), which includes an instance and a candidate solution at step t, the model gives a probability distribution p_θ(a_t|s_t) over all the possible pairwise precedences to be modified. After performing an operation O(ω_t|a_t) = ω_{t+1} with the selected pair, a new solution ω_{t+1} is obtained. Training is performed by minimizing the loss function via gradient descent, where R_t = Σ_{i=t}^{T} γ^(i−t) r_i corresponds to the discounted sum of rewards r_i with decay factor γ in an episode of length T (see Appendix A for further details). Different Reward Functions (RF) for obtaining r can be found in the literature. Lu et al. [14] use a reward function (RF1) that takes the objective value of the initial solution as the baseline; for each subsequent action, the reward at step t is defined as the difference between f(ω_t) and the baseline. The drawback of this function is that rewards may grow larger and larger, and even bad moves far from the baseline can receive positive rewards. Alternatively, the most common approach in recent works [15,16] is to define the reward (RF2) as r_t = max{f(ω_{t+1}) − f(ω*_t), 0}, where f(ω*_t) is the objective value of the best solution found until step t. Note that this alternative yields only non-negative rewards, and all the actions that do not improve the solution receive an equal reward r_t = 0. In our case, we use a simple but effective reward function (RF3), r_t = f(ω_{t+1}) − f(ω_t), which defines the reward as the improvement of the objective value between steps t and t+1 and also admits negative values. The RF3 reward function yields faster convergence with less variability, as can be seen in the comparison of the mentioned reward functions depicted in Fig. 3 (more details in Appendix B.1).

Figure 3: Training curves using different reward functions.
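A hedged sketch of RF3 and of the discounted return R_t used in the loss; the episode below is a hypothetical list of objective values, not data from the paper.

```python
# Hedged sketch: RF3 rewards and the discounted returns used by REINFORCE.
# `objectives` is a hypothetical trace [f(w_0), f(w_1), ..., f(w_T)].

def rf3_rewards(objectives):
    """RF3: r_t = f(w_{t+1}) - f(w_t); negative rewards are kept."""
    return [objectives[t + 1] - objectives[t] for t in range(len(objectives) - 1)]

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{i=t}^{T} gamma^(i-t) * r_i, accumulated backwards in O(T)."""
    returns, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

objectives = [10.0, 12.5, 12.1, 13.0]   # hypothetical episode
print(rf3_rewards(objectives))          # approximately [2.5, -0.4, 0.9]
print(discounted_returns(rf3_rewards(objectives)))
```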
Automated Curriculum Learning

Curriculum Learning consists of training the models in a controlled manner, where the difficulty of the samples is manually increased throughout the process [27]. In this case, the difficulty can be defined as the percentage of moves that worsen the objective value with respect to all the possible moves. We do not make use of a manual curriculum learning strategy, where the difficulty of the inputs fed to the model is increased by hand, as done by Ma et al. [16]. Instead, we iteratively feed the previously modified solution to the model without any step limit and finish the epoch once the model gets stuck in a local optimum. This enables an automated curriculum learning that does not require any additional hyperparameter. However, learning is performed with a batch of instances, and not all of them reach a local optimum in the same number of steps. Thus, we save the best average reward obtained by the model, and we consider the algorithm to be stuck when it does not improve the best average reward for K_max = 5 iterations.

Operator

The model is flexible, allowing the practitioner to define the operator that best fits the problem; we use the insert operator, as it yields good results for the PRP [28]. See Appendix A.1 for further details on the operator.

Applications of the Neural Improvement Model

Neural Hill Climber

The Hill Climbing heuristic (HC) is a procedure that continuously tries to improve a given solution by performing local changes, looking for better candidate solutions in the neighborhood. Examples of conventional HC procedures include, among others, Best First Hill Climbing (BFHC), which selects the first candidate (neighbor) that improves the present solution; Steepest-Ascent Hill Climbing (SAHC), which selects the best candidate solution from the whole neighborhood; and Stochastic Hill Climbing (SHC), which randomly picks one solution from the neighborhood. We propose the Neural Hill Climber (NHC), which iteratively outputs the pair of items with the maximum probability of being modified. Once the local operator is applied to the given pair, the new solution is again fed to the model. In general, HC heuristics do not allow the objective value to decrease. Thus, when the action given by the model does not improve the solution, we sort the probability vector and select the first action that improves it. Eventually, just like conventional HC procedures, the neural method will get stuck in a local optimum where an improving move cannot be found. In this case, an alternative is to restart the procedure from another random candidate solution.

Advanced Hill Climbers

Beyond the HC, the NI model can be used as the core of numerous methods to create intelligently guided algorithms. One of the many examples is Tabu Search (TS) [29], which enhances the performance of the HC method by allowing worsening moves whenever a local optimum is reached. Instead of restarting the solution, a move in the neighborhood is made with the goal of finding a better optimum. In order to avoid getting trapped in cycles, TS maintains a tabu memory of previously visited states to prevent visiting them again. Another algorithm is Iterated Local Search (ILS). Once the search gets stuck in a local optimum, instead of restarting the algorithm with a new random solution as in standard HC algorithms, ILS perturbs the best solution found so far and resumes the search from this new solution. The perturbation level is dynamically changed based on the total budget left (number of evaluations or time). We use the NI model to guide the local moves of a Neural Tabu Search (NTS) and a Neural Iterated Local Search (NILS), and analyse their performance together with NHC, comparing them to conventional alternatives in the following section. A sketch of the NHC loop is given below.
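This is a hedged sketch of the loop, not the authors' implementation: model_probs is a placeholder for the trained NI model, prp_objective comes from the earlier sketch, and the action indices are treated as positions in the ranking for simplicity.

```python
# Hedged sketch of the Neural Hill Climber loop described above.
import numpy as np

def insert_move(ranking, i, j):
    """Insert operator: move the element at position i to position j."""
    r = list(ranking)
    r.insert(j, r.pop(i))
    return r

def neural_hill_climber(B, ranking, model_probs, max_steps=1000):
    best = list(ranking)
    best_val = prp_objective(B, best)          # from the earlier sketch
    for _ in range(max_steps):
        p = model_probs(B, best)               # N x N probabilities per pair
        # Greedy: try the most probable action first; if it does not improve,
        # fall back to the remaining actions in order of probability.
        for flat in np.argsort(p, axis=None)[::-1]:
            i, j = divmod(int(flat), len(best))
            if i == j:
                continue
            cand = insert_move(best, i, j)
            val = prp_objective(B, cand)
            if val > best_val:
                best, best_val = cand, val
                break
        else:
            break                              # local optimum: no improving move
    return best, best_val
```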
Experiments

In this section, we present a thorough experimentation of the proposed NI model. First, we perform experiments to analyse its short-term performance (one step); then, we evaluate the long-term (multi-step) capabilities of the model implemented in the different algorithms: NHC, NTS and NILS.

Setup

For the experiments, we follow common practice and train the model using randomly generated instances, where gradients are averaged over a batch of 64 instances. We train two models using instances of two sizes: N = 20 and N = 40. We use small model sizes due to limited computational resources; in fact, we demonstrate the capability of these models to generalize to larger instances than those used for training. Unless mentioned otherwise, the model trained with instances of size N = 20 is used. See Appendix A.2 for more details on the hyperparameters. We implemented the algorithms using Python 3.8. Neural models were trained on an Nvidia RTX 2070 GPU, while methods that do not need a GPU were run on a cluster of 55 nodes, each equipped with two Intel Xeon X5650 CPUs and 64 GB of memory. See the supplementary material for the code implementation.

NI Model Performance Analysis

One-step

The short-term analysis focuses on the capability of the NI model to provide a solution that outperforms the present one. In terms of the neighborhood, we would expect the NI model to be able to identify the best, or nearly the best, neighbors. Fig. 4a sorts the rewards of all the possible operations obtained in 9 subsequent optimization steps, while the reward obtained by the model is highlighted by a vertical line. The model is capable of handling more difficult situations over time, which can be noted by the leftward shift of the distribution. In fact, the lack of improving moves in the last step (bottom-right corner) forces the model to select a negatively rewarded move. Moreover, if we take all the possible actions and sort them based on the improvement obtained by performing each particular action, we can rank the model's selection (a sketch of this ranking computation is given at the end of this section). Fig. 4b shows a histogram of selected-action rankings among all the possibilities. Considering that only (N−1)² insert operations are valid, on average, the action that the model takes is ranked between the 98th and 99th percentile (5th out of 361 for N = 20, 13th out of 2,401 for N = 50 and 31st out of 9,801 for N = 100). More details can be found in Appendix B.2.

Multi-step

The NI model learns to consistently improve the solution even in difficult situations. However, we still need to analyse its performance as the building block of a HC algorithm. For that purpose, we implement an NHC guided by the NI model trained with instances of size 20, as described in Section 4. We compare NHC to conventional HC procedures: BFHC and SAHC. The multi-step performance assessment considers a bi-objective problem: (1) obtain a high-quality solution, and (2) do so within a limited budget of evaluations.

Performance

The second experiment examines the HC algorithms explained in Section 4.3. The performance of the methods is measured on instances of different sizes with three different maximum numbers of evaluations: N, 10N and 100N, where N is the size of the instance to be solved. In addition to the previously mentioned methods, we incorporate an additional HC procedure, the Stochastic Hill Climbing (SHC), which randomly picks a candidate from the neighborhood; an NHC trained using instances of size 40 (NHC-40); a conventional Tabu Search algorithm with an underlying Best-First strategy (BFTS); and, finally, the Neural Tabu Search (NTS) algorithm. The performance is measured by means of the average percentage gap to the optimum value, given by the state-of-the-art metaheuristic [30]. For each size, we use 1,280 different randomly generated instances.
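The ranking computation referenced in the one-step analysis can be sketched as follows, reusing prp_objective and insert_move from the earlier sketches; this is a hedged reading of the analysis, not the authors' code.

```python
# Hedged sketch: 1-based rank of the model-selected action among insert moves.

def action_rank(B, ranking, chosen):
    N = len(ranking)
    base = prp_objective(B, ranking)
    gains = sorted((prp_objective(B, insert_move(ranking, i, j)) - base
                    for i in range(N) for j in range(N) if i != j),
                   reverse=True)                     # best improvement first
    chosen_gain = prp_objective(B, insert_move(ranking, *chosen)) - base
    return gains.index(chosen_gain) + 1              # ties share the best rank
```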
As shown in Table 1, among the conventional HC methods, BFHC is the best-performing one. SAHC suffers from its exhaustive search procedure, and SHC performs poorly due to the lack of an improving strategy in its search. The NHC model is capable of finding better solutions than BFHC. Regarding the NHC models, note that they perform better on the instance sizes used for training (Fig. 5).

Real-World Case: NBA Historical Ranking

Here, we use our model in a real-world application. Specifically, we use historical NBA data from the 2004 season until 2020, with the aim of ranking all the NBA teams based on their historical performance. The preference matrix B is formed by the pairwise comparisons between 30 NBA teams, such that the preference of team A over team B is defined as the number of matches that team A has won against team B. In total, 25,697 matches are considered. The entries of the instance matrix are normalized between 0 and 1. The NHC model is able to give the solution that maximizes the objective value, without any restart, in a few seconds. The optimal objective value for the normalized matrix is 219.58; the ranking of the teams is shown in Appendix B.5. Note that using different conditions to define the preference matrix may change the ranking, e.g., using points for and against instead of matches won. In fact, a team that wins most of its matches by a small margin would be ranked poorly when only the difference in points is considered.

Conclusion and Future Work

This paper proposes a Neural Improvement model that employs an edge-based encoding and decoding framework. The model combines the benefits of steepest ascent and best first, being able to provide a solution (almost) as good as that provided by steepest ascent, and as fast as (or even faster than) best first. This has major implications for most state-of-the-art metaheuristics, which could use NI models instead of conventional neighborhood-based procedures. From the conducted experiments, we have observed that the output of our model tends to be categorical, i.e., the model is usually certain of its selection (even if it is not the optimal one). This drastically reduces the diversity of the solutions, since the model follows similar trajectories across different executions. This and other aspects require a more thorough analysis, and thus we plan to investigate the following strategies: (1) using a supplementary NN model that detects the most visited solutions and transfers this information to the main model in order to avoid repeatedly visiting solutions with similar features; (2) using a population of models that collaborate (or compete) to optimize a given instance. Regarding the operators, the insert operator was chosen here with literature support; however, it is not clear whether any operator can be accurately learned by the NI model. This is another aspect that requires further research. Finally, we believe that reaching the state of the art with neural improvement heuristics will be feasible in the future, but considerable effort still needs to be devoted to the correct implementation of complex neural networks in C++ code.
Fatty acids composition of fruits of selected Central European sedges, Carex L. (Cyperaceae)

By Anna Bogucka-Kocka and Magdalena Janyszek

SUMMARY

Fatty acids in the fruits of 13 sedge species (Carex L., Cyperaceae) were analyzed. The oil contents in the fruits of the studied sedges ranged between 3.73 and 46.52%. In the studied fruit oils, 14 different fatty acids were identified. The main unsaturated fatty acids were linoleic, α-linolenic, oleic, oleopalmitic n-7, oleopalmitic n-9, octadecenic and eicosenoic acids. The following acids were found in the greatest quantities: linoleic, oleic, α-linolenic and palmitic acids. Based on the fatty acid composition, the studied taxa can be divided into two groups. The first group (C. flava, C. pseudocyperus, C. riparia, C. leporina) is a very good source of linoleic acid. The second group, including the remaining species, is a good source of α-linolenic acid. The highest oleic acid contents were observed in C. vulpina. The studied material showed a low concentration of saturated fatty acids, among which palmitic acid was the main one.

INTRODUCTION

Sedges (Carex L., Cyperaceae) represent one of the most common vascular plant groups in the world. They occur in very different habitat conditions, both in wet and moist locations such as peat bogs, fens, meadows and pasture communities as well as their peripheries. They also exist in dry and extremely dry habitats, which include among others xerothermic and psammophilous grasslands. Many of these habitats constitute or may potentially constitute areas of agricultural use, for example as pastures for cattle. In spite of the fact that such diversified habitat types are dominated by the representatives of one genus, and in spite of the high biocenotic importance of sedges, the practical utilization of these plants is not great. Only in some regions of the world are a few species used as fodder or sown in meadow grass mixtures (i.e. Ingvason, 1969; Herman, 1970; Fox, 1991). However, the results of an increasing number of studies on the chemical composition of sedges indicate high nutritive values. They include, among others, macro- and microelements (e.g. Catling et al., 1994; Grzelak et al., 2005; Janyszek et al., 2005), flavonoids (Kukkonen, 1971; Manhrt, 1986), oligostilbenes (Suzuki et al., 1987; Kawabata et al., 1989; Hegnauer, 1986; Kurihara et al., 1990; Kawabata et al., 1991; Kurihara et al., 1991), alkaloids (Hegnauer, 1986), phenolic acids (Li, 1974; Bogucka-Kocka and Krzaczek, 2004), essential oils and saponins (Hegnauer, 1963).
Substances which also have a great importance in nutrition are the fatty acids contained in the seed oils of many plant species, which have long been used in the food and pharmaceutical industries as well as in the cosmetic industry. The unsaturated fatty acids are particularly valuable. They are indispensable for a correct metabolism but, unfortunately, animals are not able to synthesize them by themselves. Plant species which possess abundant amounts of this type of fatty acids include, e.g., Olea europaea L., Helianthus annuus L., Zea mays L., Oenothera biennis L., Arachis hypogaea L., Juglans regia L., Linum usitatissimum L. and Brassica napus L.

Analyses of fatty acids in the representatives of the Carex genus are not frequent. The few existing studies carried out on the generative organs (sedge nutlets) were performed on only about a dozen species (Earle and Jones, 1962; Jones and Earle, 1966; Barclay and Earle, 1974; Egorova, 1999; Ahmad and Ansari, 1987), and only the percentage of oil content was reported. Additionally, in the case of fruits, most of the earlier methods of analysis were based on hot extraction, which has currently been replaced by cold extraction. The use of the latter method protects fatty acids against oxygenation. Other works dealing with this topic referred to fatty acids occurring in leaf structures, which do not produce any oil fractions (e.g. Ayaz and Olgun, 2000). The main aim of the present work was the quantitative and qualitative analysis of the fatty acids occurring in the oil from the fruits (nuts) of selected Carex species and the evaluation of the selected species as a potential source of indispensable unsaturated fatty acids (IUFA).

Plant material

The study was conducted on representatives of 13 species of sedges which are comparatively widespread and relatively common in the Central European Lowlands. Representatives of the following species were studied — from the Vignea subgenus: Carex paniculata L., C. appropinquata Schum., C. diandra Schrank., C. vulpina L., C. otrubae Podp. and C. contigua Hoppe; from the Carex subgenus: C. leporina L., C. rostrata Stokes, C. pseudocyperus L., C. flava L., C. acuta L., C. nigra Reich. and C. elata All. The systematics and nomenclature of the species follow Egorova (1999). For each species, ripe utricles were collected from three different populations growing in the same types of phytocenoses. Plant material was sampled from natural sites, from the habitat and phytocenotic conditions characteristic and most typical for the given species (e.g. Janyszek et al., 2008; Janyszek and Jagodzinski, 2009). The specimens came from the territory of Poland. The plant material has been deposited in the Herbarium of the Botany Department (POZNB) of Poznań University of Life Sciences (Poland). The utricles were dried under natural conditions. After drying, the fruits were removed from the utricles. The nuts (fruits of the Carex), constituting the study material, were analyzed. The nuts were weighed, packed, kept in a cool dry room, and later ground into powder with an electrical mill.

Extraction of oil

Fatty acids were obtained by cold hexane extraction from the disintegrated fruits. The disintegrated raw material was flooded with a 5-fold amount of hexane and macerated for 24 h at room temperature in the dark. After that time the samples were filtered and again flooded with an adequate amount of solvent. This procedure was repeated three times. Then the extracts were pooled, the hexane was removed (vacuum rotary evaporation, 35 °C), and the extract was filled into glass ampoules and vacuum packed. The samples were stored at −20 °C until the time of analysis. Each analysis for each species was performed in three replicates, which, with 3 collected utricle populations for each taxon, gave 9 independent tests. Averaged values are reported (Tables 1, 2).

Fatty acid occurrences were determined in the analyzed samples by HR-GC methods. After standard methylation, fatty acid methyl esters were analyzed on a GC Agilent 6890 gas chromatograph equipped with an RTX 2330 Restek column (100 m, calibre 0.25 mm); temperature of the injector port 240 °C, column 175 °C, detector port 250 °C; carrier gas helium 11/1 min; split injection, 1 µl/cm³. The percentage content was estimated by internal normalization.

Statistical Analysis

The obtained results of the qualitative and quantitative fatty acid fractions were subjected to cluster analysis by Ward's method using Statistica 6.0. The analyzed variables included the standardized concentrations of the particular fatty acids. Mutual similarities of the particular species are presented in the dendrogram (Fig. 1).
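As a hedged sketch of this clustering step, the code below standardizes a hypothetical matrix of fatty acid concentrations and builds a Ward-linkage dendrogram with SciPy (the original analysis used Statistica 6.0; the species subset and all numeric values here are placeholders):

```python
# Hedged sketch: Ward's-method cluster analysis on standardized fatty acid
# concentrations. All numeric values are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.stats import zscore

species = ["C. flava", "C. pseudocyperus", "C. leporina", "C. vulpina", "C. elata"]
# Rows: species; columns: concentrations of particular fatty acids (%).
concentrations = np.array([
    [74.2,  1.3, 11.5,  8.0],
    [72.6,  0.5, 12.0,  7.5],
    [64.9,  2.0, 15.0,  9.0],
    [20.0, 25.0, 42.5, 10.0],
    [18.0, 32.4, 30.0, 11.0],
])

Z = linkage(zscore(concentrations, axis=0), method="ward")  # Euclidean distance
dendrogram(Z, labels=species)
plt.ylabel("Euclidean distance")
plt.tight_layout()
plt.show()
```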
RESULTS AND DISCUSSION

In the examined seed oil samples the occurrence of 14 fatty acids was confirmed (Tables 1, 2). In the studied Carex oils, monounsaturated (MUFA) and polyunsaturated (PUFA) fatty acids predominated. In all studied species, linoleic, oleic and α-linolenic (unsaturated) and palmitic (saturated) acids were found in the greatest quantities. From the point of view of nutrition physiology, the most important for a correct metabolism in the animal organism are the PUFA. From this group of acids, linoleic and α-linolenic acids were identified in the analyzed fruits. Both acids occur in significant amounts in the fruits of all analyzed taxa. Linoleic acid occurs in all studied species and is the predominant compound in the majority of them. This acid is present in the greatest quantities (above 60%) in C. flava (74.23%), C. pseudocyperus (72.61%), C. rostrata (67.54%) and C. leporina (64.87%) (Tables 1, 2). These species differ regarding the concentration of linoleic acid as compared with the other studied taxa. The concentration of this compound is also higher than in Olea europaea (about 12%) (Dubois et al., 2007) and is comparable with the content observed in the Oenothera genus (Krzaczek et al., 1995). α-Linolenic acid, found in the fruits of all studied species, is a predominant component of the fatty acid fraction in C. elata (32.44%) and in C. otrubae (31.93%). The lowest amounts were observed in C. pseudocyperus (0.51%). For comparison purposes, in the fruits of Olea europaea only 0.6% of this acid was found (Dubois et al., 2007), i.e. an about 50 times lower content than in C. elata and C. otrubae.
Among the isolated and identified MUFA, oleic acid was the most abundant. It is found in all the studied taxa, ranging from 11.5% in C. flava to 42.5% in C. vulpina. The presence of other unsaturated acids was also found (oleopalmitic n-9, oleopalmitic n-7, octadecenic and eicosenoic), but these acids occur in significantly smaller amounts. In the studied species there was also a small amount of erucic acid, whose concentration is generally lower in the species belonging to the Carex subgenus (Tables 1, 2). As commented, C. pseudocyperus has a high content of unsaturated fatty acids (particularly linoleic acid – 72.61%), but it is also characterized by the lowest level of α-linolenic acid (0.51%). A similar situation also applies to other species. The dominating unsaturated fatty acids divide the analyzed group into two subgroups. The first one, with 4 species (C. flava, C. pseudocyperus, C. riparia and C. leporina), is a good source of linoleic acid, and the second subgroup, including the remaining species, is a good source of α-linolenic acid. For oleic acid, C. vulpina can be regarded as the best raw material. A greater content of linoleic acid characterizes the species from the Carex subgenus, and of α-linolenic and oleic acids those from the Vignea subgenus. Our analyses have also shown the presence of saturated acids in the studied samples, where palmitic acid is the predominant one. The remaining saturated fatty acids, which were not found in all the studied species, occur in comparatively low concentrations (usually not exceeding 2%) (Tables 1, 2).

The percentages of the fatty oil fractions in the fruits of the studied sedges vary. The lowest content was observed in C. pseudocyperus (3.73%) and the highest in C. leporina (46.52%) (Tables 1, 2). In the remaining species these values oscillated within two intervals. The first interval (10-15% oil content) agrees with the one reported by earlier authors (Earle and Jones, 1962; Jones and Earle, 1966; Morice, 1977; Ahmad and Ansari, 1987) for other sedge species or organs (Ayaz and Olgun, 2000). The importance of the species from the Cyperaceae family as a significant source of oil is confirmed by the results of studies made on Cyperus esculentus tubers by Eteshola and Oraedu (1996). According to these authors the tubers can contain up to 27% oil. Similar studies on the fruits of Cyperus esculentus (Cyperaceae) have been carried out by Kapseu et al. (1997). The fruits of C. esculentus are characterized by markedly worse parameters of fatty acid composition than the studied Carex species. In addition, in C. esculentus a distinct domination of saturated fatty acids was found, particularly stearic acid.

A dendrogram was obtained after the analysis of similarity based on the profile of fatty acid composition. The obtained dendrogram (Fig. 1) shows a division of the Carex species into two groups: the first one included C. appropinquata and C. vulpina, distinguished by the highest level of palmitic acid but having no close habitat connections; the second group, including the remaining species, can be additionally divided into two subgroups. The species belonging to the first of the subgroups are characterized by a higher level of linoleic acid and a lower level of α-linolenic acid than the taxa from the second one. This subgroup includes only species from very moist or medium moist habitats. It refers particularly to the agglomeration of C. rostrata, C. pseudocyperus, C. flava and C. nigra, and the agglomeration of C. paniculata and C. diandra. In both cases, the listed species grow in very similar habitat conditions. A majority of the species in the described subgroup are representatives of the Carex subgenus. The second of the determined subgroups, except for C. acuta, contains species growing in drier habitats. In this subgroup, all taxa except for C. acuta and C. elata belong to the Vignea subgenus.
Based on our results, the fruits of the studied species of the genus Carex can be considered as a plant raw material with a relatively high content of unsaturated fatty acids, both polyunsaturated, such as linoleic and α-linolenic acids, and monounsaturated, such as oleic acid. Comparing the obtained results to data referring to other plant species widely applied in prophylaxis and therapy, we can infer that the oil from sedge nutlets is a very good source of PUFA. The results of our study and the literature data show that the studied sedges can be divided into two distinct groups: one with a concentration of α-linolenic acid oscillating within the limits from ten-plus to several tens of percent, and another varying from slightly more than 0 to almost 2.2 percent. The confirmed feature concerning the group of saturated acids is the domination of palmitic acid, and the concentrations of the remaining saturated acids were similar to those presented in the literature data. Most likely, such a composition of fatty acids and their mutual proportions are characteristic of the representatives of the Carex genus.

Table 1. Percentage contents of fatty acids obtained from fruits of Carex L. from the Carex subgenus.

Figure 1. Dendrogram of the cluster analysis of the fatty acid compositions of 13 species of Carex based on all identified fatty acids. The clustering was made using Ward's method of agglomeration with Euclidean distance.
Characteristics of hypoparathyroidism in Colombia: data from a single center in the city of Medellín

ABSTRACT

Objective: Hypoparathyroidism is a rare condition whose most common etiology is a complication of neck surgery. The aim of the study was to identify the clinical and biochemical profile of patients with a diagnosis of hypoparathyroidism, including the frequency of symptoms, clinical signs, long-term complications and disease control. Additionally, the study sought to establish the medication profile and the doses required by the patients. Subjects and method: A retrospective cohort study was conducted wherein all patients with ICD-10 codes associated with hypoparathyroidism between 2011 and 2018 at the Hospital Universitario San Vicente Fundación were included. We investigated the etiology of the disease; the biochemical profile, including lowest serum calcium, highest serum phosphorus, 25OHD levels, calciuria and calcium/phosphorus product; medication doses; disease control; and the presence of complications, especially renal and neurologic complications. Results: The cohort included 108 patients (99 women/9 men) with a mean age of 51.6 ± 15.6 years. The main etiology was postoperative (93.5%), the dose of elemental calcium received was relatively low (mean 1,164 mg/day), and in only 9.2% of cases was more than 2,500 mg/day of elemental calcium necessary. We were able to evaluate follow-up in 89 patients and found that only 57.3% met the criteria for controlled disease. Conclusion: The clinical profile of patients with hypoparathyroidism in our cohort is similar to that described in other international studies, with a predominantly postoperative etiology. With standard therapy, adequate control is achieved in only a little more than half of patients. Arch Endocrinol Metab. 2020;64(3):282-9

INTRODUCTION

Hypoparathyroidism is characterized by low serum concentrations of parathyroid hormone (PTH), which results in hypocalcemia and hyperphosphatemia (1,2). In the United States, it is estimated that the prevalence of this condition is nearly 37 cases per 100,000 inhabitants, of which 8 cases per 100,000 have a non-surgical etiology and 29 cases per 100,000 are postoperative (3,4). In European countries, a prevalence of similar proportions to that in the United States has been described (5,6). The primary etiology is accidental surgical resection of the parathyroid glands during procedures such as thyroidectomies, but less frequent causes are also described, such as autoimmune diseases, infiltrative diseases and neck radiation, among others (7). Clinical manifestations can be acute, such as neuromuscular irritability, weakness, seizures and laryngospasm, or they can be chronic, affecting a large number of systems. The latter are explained by the persistence of an inadequate balance between serum calcium and phosphorus over time, which allows for the deposition of calcium in different tissues (calcification of the basal ganglia, nephrocalcinosis, etc.) (8). The main axis of management is the replacement of calcium and calcitriol, but in some selected cases hormonal replacement with PTH can be used (9-16).
Knowing the complete clinical profile of patients presenting with this disease, as well as having an estimate of the main clinical manifestations, physical findings and complications, would allow us to have a broader view of the disease, which would ultimately result in earlier diagnosis, especially in groups that may be predisposed to developing the disease, such as patients who have undergone neck surgery. In addition, with the identification of clinical and laboratory variables, prediction models can be proposed for this disease, thus allowing for early diagnosis.

In Colombia, to the best of our knowledge, a characterization of patients with hypoparathyroidism and a determination of the etiology of the disease have not yet been performed, and the clinical manifestations have also not been established, nor how often each one occurs. In addition, data from Latin America are limited. The objective of this study was to determine the clinical and paraclinical characteristics of patients with hypoparathyroidism who were treated at a high-complexity hospital in one of the country's main cities.

SUBJECTS AND METHODS

A retrospective cohort study was conducted at a single tertiary care hospital in Medellín, Colombia, and approved by the Ethics Committee (Hospital Universitario San Vicente Fundación – HUSVF/Universidad de Antioquia). Patients over 18 years of age who had low PTH concentrations (<20 pg/mL) for more than 6 months and who had been evaluated at least once in the hospital for symptoms of hypocalcemia were included (including emergency consultations and outpatient care at the hospital). Patients under 18 years of age, those with pseudohypoparathyroidism, and those who did not have a PTH report in their clinical history or laboratory tests were excluded.

We recorded the demographic variables (sex, age and race); clinical aspects (etiology, hospital/outpatient management, previous diagnosis of hypoparathyroidism, time of evolution of symptoms, comorbidities, symptoms and signs associated with hypocalcemia, and chronic complications of the disease); biochemistry (lowest serum calcium, initial calcium, 24-h calciuria, phosphorus, 25OHD, albumin and PTH); and disease management (elemental calcium dose, presentation of calcium used, calcitriol dose, thiazide diuretic use, PTH analog use, disease control, emergency consultations and number of hospitalizations). This study was authorized by the HUSVF Ethics Committee.

Definitions

Hypoparathyroidism was defined as a low PTH concentration (<20 pg/mL) accompanied by symptoms of hypocalcemia or the requirement for calcium or calcitriol supplementation to avoid their onset. Patients had to meet these criteria for more than 6 months, as a shorter time indicates the presence of transient hypoparathyroidism. This specific population was not taken into account in the present study, as its complication profile is different from that of patients with permanent hypoparathyroidism (they have no risk of chronic complications of the disease). Hypocalcemia was defined as a serum calcium concentration lower than 8.4 mg/dL, corrected for albumin. Disease control was defined as minimal variation or stability in the results of serum calcium, phosphorus and calciuria in at least two controls, at least 2 months apart. In cases where only the calcium value was available, this parameter was used to define whether the disease was controlled in patients who had undergone more than one assessment (9,11,17).
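As a hedged illustration of the hypocalcemia definition above: the paper does not state which albumin-correction formula was used, so the widely used correction Ca + 0.8 × (4.0 − albumin) is assumed here.

```python
# Hedged sketch: albumin-corrected calcium and the 8.4 mg/dL cutoff above.
# Assumption: the common correction Ca + 0.8*(4.0 - albumin); the paper does
# not specify the exact formula it applied.

HYPOCALCEMIA_CUTOFF = 8.4  # mg/dL, corrected for albumin

def corrected_calcium(calcium_mg_dl, albumin_g_dl):
    """Albumin-corrected serum calcium in mg/dL."""
    return calcium_mg_dl + 0.8 * (4.0 - albumin_g_dl)

def is_hypocalcemic(calcium_mg_dl, albumin_g_dl):
    return corrected_calcium(calcium_mg_dl, albumin_g_dl) < HYPOCALCEMIA_CUTOFF

print(corrected_calcium(7.9, 2.8))  # 8.86 mg/dL after correction
print(is_hypocalcemic(7.9, 2.8))    # False: above the 8.4 mg/dL cutoff
```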
Indications for the use of PTH were taken from current international recommendations and include: elemental calcium dose >2.5 g/day, calcitriol dose >1.5 µg/day, calcium/phosphorus product >55 mg²/dL², hypercalciuria (>250 mg/24 h in women and >300 mg/24 h in men) and the presence of renal complications (nephrocalcinosis and nephrolithiasis) (11).

Data collection

Authorization was obtained from the HUSVF research department to access the database of patients treated between 2011 and 2017 who were associated with the following ICD-10 codes: D821 (DiGeorge syndrome), E200 (idiopathic hypoparathyroidism), E201 (pseudohypoparathyroidism), E208 (other types of hypoparathyroidism), E209 (unspecified hypoparathyroidism), E58X (dietary calcium deficiency), E835 (calcium metabolism disorder) and E892 (hypoparathyroidism secondary to procedures). Data collection was performed between October 11, 2017 and November 9, 2018 by four researchers (JZL, SALT, DAH and ECH) using a preset format with predefined variables, available on GoogleDocs®. Patients with an equivocal diagnosis were reviewed by two authors (JZL, SALT), who reached a diagnostic consensus.

Statistical analysis

The statistical program SPSS Statistics 25 (IBM, Chicago) was used to analyze the data. Continuous variables are presented as medians and interquartile ranges, or as means and standard deviations, according to the distribution of a given variable assessed with the Shapiro-Wilk test. Categorical variables are reported as frequencies and percentages. Differences between median times were compared using non-parametric tests, mainly the Chi-squared test. To compare frequencies between groups, a Pearson Chi-squared test was used.

RESULTS

A total of 1,422 medical records were reviewed, to which the inclusion and exclusion criteria were applied. In the end, 108 patients were included in the study, and 1,314 patients who did not meet the inclusion criteria were excluded. The main reason for exclusion was the absence of serum PTH values (n = 368), followed by duplicate records (n = 192), children (n = 317), normal PTH (n = 103), hyperparathyroidism (n = 125), hypercalcemia (n = 191), transient hypoparathyroidism (n = 6), pseudohypoparathyroidism (n = 6), hypomagnesemia (n = 1), hungry bone syndrome (n = 3) and other causes (n = 2).

In 86.1% (n = 93) of patients, there was a previous diagnosis of hypoparathyroidism when they were first evaluated in the hospital, while in 13.9% (n = 15) the diagnosis was made during the follow-up conducted at the institution. A total of 91.7% (n = 99) were women, with a mean age of 51.6 years (SD 15.6), and the most frequently identified etiology was postoperative (93.5%), followed by idiopathic (4.6%), cause not described (0.9%) and post-radiation (0.9%). It is worth noting that, among cases of postoperative hypoparathyroidism, patients operated on by general surgeons could not be distinguished from those operated on by specialists in head and neck surgery, because this information was not available in most clinical histories. The main comorbidities identified were hypothyroidism (88.9%), thyroid carcinoma (51.9%), arterial hypertension (42.6%) and dyslipidemia (21.3%). The patients' baseline characteristics are described in Table 1; the information is presented as mean ± SD (range) for age and as percentages for the other variables.
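A hedged sketch of the rhPTH indication check described above (field names are hypothetical; the thresholds are the ones quoted from the cited recommendations, and meeting any single criterion is treated as an indication):

```python
# Hedged sketch: rhPTH indication criteria quoted above.
# Field names are hypothetical; thresholds follow the cited recommendations.

def rhpth_indicated(patient):
    calciuria_limit = 250 if patient["sex"] == "F" else 300      # mg/24 h
    criteria = [
        patient["elemental_calcium_mg_day"] > 2500,              # >2.5 g/day
        patient["calcitriol_ug_day"] > 1.5,                      # >1.5 ug/day
        patient["calcium_mg_dl"] * patient["phosphorus_mg_dl"] > 55,  # Ca x P
        patient.get("calciuria_mg_24h", 0) > calciuria_limit,    # hypercalciuria
        patient.get("nephrocalcinosis", False) or patient.get("nephrolithiasis", False),
    ]
    return any(criteria)   # any single criterion suffices

example = {"sex": "F", "elemental_calcium_mg_day": 3150, "calcitriol_ug_day": 1.0,
           "calcium_mg_dl": 4.77, "phosphorus_mg_dl": 6.8}
print(rhpth_indicated(example))  # True: calcium dose exceeds 2,500 mg/day
```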
The clinical features included paresthesia, musculoskeletal compromise, renal disease, and neurological and psychiatric features; these features are detailed in Table 2. There was an average of 0.7 emergency consultations, although the range was very wide (0 to 21).

The median follow-up time was 860 days (IQR 344-1,712). A total of 72.6% of patients were from the city of Medellín and 27.2% from outside Medellín (0.9% without data). All patients had social security coverage: 33.3% belonged to a subsidized system and 65.8% to a contributive system (0.9% without data). There was no difference in follow-up time between patients with controlled and uncontrolled disease (controlled: median 1,157 days, 95% CI 1,033-1,549 vs. uncontrolled: median 856 days, 95% CI 856-1,396; p = 0.756). There were also no differences regarding city (p = 0.661) or social security (p = 0.40).

Two patients (1.9%) receiving rhPTH, in this case teriparatide, were documented. Both patients were women of Hispanic ancestry with postoperative hypoparathyroidism. One of the patients was 47 years old, had a serum PTH of 3.5 pg/mL, serum calcium of 4.77 mg/dL and serum phosphorus of 6.8 mg/dL (calcium/phosphorus product of 32.4 mg²/dL²), and received 1.0 µg/day of calcitriol and 3,150 mg/day of elemental calcium. The other patient was 27 years old, had a serum PTH of 6.2 pg/mL, serum calcium of 4.8 mg/dL and serum phosphorus of 7.0 mg/dL (calcium/phosphorus product of 30.7 mg²/dL²), and received 2.0 µg/day of calcitriol and 2,880 mg/day of elemental calcium. There were no data on 24-hour calciuria or on neurologic complications for these two patients.

There were 23 patients (21%) who met the criteria for rhPTH use. In the group of uncontrolled patients (n = 44), 16.7% (n = 15) met some of the criteria, while in the group of controlled patients (n = 45), 3.12% (n = 3) had an indication for this treatment.

In the total cohort, it was found that the presentation of calcium used was calcium carbonate in 78.1% of cases and calcium citrate in the remaining 21.9%. A similar proportion was maintained when the groups of controlled (77.8% carbonate, 22.2% citrate) and uncontrolled (75% carbonate, 25% citrate) patients were evaluated. The average dose of elemental calcium at the last control was 1,164 mg/day (p25-p75, 480-1,440) and of calcitriol 0.7 µg/day (p25-p75, 0.5-1.0), and 28% of the population (n = 30) were receiving thiazide diuretics. However, 9.2% (n = 10) required more than 2,500 mg/day of elemental calcium and 2.7% (n = 7) received more than 1.5 µg/day of calcitriol. It was possible to assess disease control in 89 patients (82%) who had multiple assessments over time, of whom 45 (57.3%) had control of the disease.

When comparing the group of controlled patients against the uncontrolled patients, the PTH level in the controlled group was higher than in the uncontrolled group (9.2 vs. 5.1), with lower required doses of elemental calcium and calcitriol.
Similarly, higher calcium values, lower phosphorus values, and a lower calcium/phosphorus product were observed in the controlled group versus the uncontrolled group (Table 3). The information in Table 3 is presented as median (p25-p75) for the evaluated variables. *Initial calcium value. Units: PTH, pg/mL; calcium at onset, mg/dL; phosphorus, mg/dL; 25OHD, ng/mL; calcium/phosphorus product, mg²/dL²; elemental calcium dose, mg; calcitriol dose, µg. α: p < 0.01. There were no significant differences in the other variables.

The remaining 5 patients met the criteria for initiation of rhPTH, but there was no follow-up to establish whether or not there was control of the disease. Data from these patients are shown in Table 4. *Some patients met more than one of the criteria: all three patients who met the criteria for calcitriol dose also met the criteria for elemental calcium dose; one of the patients who met the criteria for the Ca/P product also met the criteria for elemental calcium dose; in the hypercalciuria group, one met the criteria for elemental calcium dose, two met the nephrocalcinosis criteria and one the nephrolithiasis criteria; one patient in the nephrolithiasis group also met the nephrocalcinosis criteria. **Hypercalciuria is defined as >250 mg/24 h in women and >300 mg/24 h in men.

DISCUSSION
A cohort of 108 patients with hypoparathyroidism was evaluated in a high-complexity hospital in the city of Medellín. To the best of our knowledge, this is the only study of its kind so far in Colombian territory. The results obtained confirm the main etiology as postoperative (93.5%), with other causes being infrequent, which is in accordance with studies conducted in American and European populations. However, the percentage of cases of postoperative etiology was much higher than that described in the literature, which ranges between 62 and 78% (3,4,8,18-20). These results can be explained by the measures used to prevent postoperative hypoparathyroidism, of which the most widely supported in the literature (21-24) are preservation of the parathyroid glands through meticulous dissection, preservation of their blood flow, and even autotransplantation or autologous transplantation of the gland, procedures which must be implemented by a surgeon with expertise in the area, a condition that is not met at all hospital centers in Colombia. The most frequent comorbidities were hypothyroidism and thyroid carcinoma, a completely expected result, given that the primary reason for cervical surgery is thyroid gland surgery and it is described in the literature that up to 90% of cases of postoperative hypoparathyroidism are due to surgeries that involve this gland (18). The main reason for performing a thyroidectomy or hemithyroidectomy was thyroid carcinoma, in accordance with what is described in the literature (3).

As for the clinical manifestations of the disease, the most frequently described symptoms were paresthesia, followed by musculoskeletal involvement, which occurred in about a third of patients, similar to what has been documented in the literature, where these manifestations can occur in between 24% and 54% of patients, depending on the region (25,26). Very low frequencies of Trousseau's (13%) and Chvostek's (6.5%) signs were found; however, an association analysis between their presence and serum calcium levels was not performed, so we cannot know for certain their predictive value for the presence of hypocalcemia (8,27-30). Depression was the most frequently documented psychiatric manifestation in our study, consistent with the described increased risk of psychiatric disorders in patients with hypoparathyroidism. The study by Underbjerg and cols. (30) reported an HR of 1.99 (95% CI 1.14-3.46) for depression and bipolar disorder, which has also been documented in other observational studies (28,31). Regarding complications, it was found that about half of the patients in whom calcium was measured in a 24-h urine test met the definition of hypercalciuria (n = 9), of whom 33% (n = 3) had renal imaging in which associated complications (nephrocalcinosis and nephrolithiasis) were fully documented. Although these results lack reliability, as only a small proportion of the study population had a 24-h urine calciuria measurement and imaging to evaluate renal complications, it is suggested that every patient with hypercalciuria during follow-up should have renal imaging done. Of the patients, 11.1% had imaging of the central nervous system, where basal ganglia calcifications were found in 33.3%, strikingly a much smaller proportion than is usually described in different observational studies, in which it ranges from 52%-74% of cases (17,27,28,32,33). Approximately half of the patients had adequate disease control, and this group required a lower dose of elemental calcium and calcitriol compared with the group of uncontrolled patients. Similarly, it was documented that 23 patients (21%) of the cohort had an indication for rhPTH treatment, of whom the vast majority were in the uncontrolled group. The low doses of elemental calcium and calcitriol that these patients were receiving in the study are striking considering the doses usually described for these patients (1).

Different studies have been conducted evaluating the use of rhPTH 1-84 (Natpara®) (14,34,35), which have shown its usefulness in the management of patients with hypoparathyroidism, demonstrating the need for lower doses of calcium and calcitriol, improvement in bone health, and also improvement in quality of life (12,14). The results regarding the impact on hypercalciuria have not been clear; only in one study, in which an infusion pump protocol was used, was a 60%-70% reduction in calciuria demonstrated. Regarding the use of PTH 1-34 (teriparatide), although this drug is not approved for the treatment of patients with hypoparathyroidism, an impact on the daily dose of calcium and calcitriol has also been shown, as well as improvement in quality of life and bone health. For all the above reasons, the use of PTH, mainly rhPTH 1-84 (Natpara®), may be implemented as part of the treatment in those patients who persist with poor control of their disease or develop long-term complications, as stated in the international management guidelines (11). However, rhPTH 1-84 is not available in our region and teriparatide is only approved for use in patients with osteoporosis, which limits the use of this replacement in daily clinical practice.
Limitations
This is a retrospective study, with the limitations inherent to data loss and to collection based on what clinicians recorded. There is no way to know whether the study subjects are representative of the original population; in addition, because patients were identified via ICD-10 codes, the sample may not represent all patients with hypoparathyroidism treated at the hospital during the study period. The main reason for non-inclusion was the absence of PTH levels in the medical records, despite the fact that many of these patients already had a previous diagnosis of hypoparathyroidism and were receiving calcium, which forced the exclusion of a high number of patients with hypoparathyroidism.

Due to the retrospective nature of the study, some important aspects of the population could not be established: no data were found on which type of specialist performed the intervention in cases of postoperative hypoparathyroidism (general surgery vs. head and neck surgery), and it was not possible to establish the duration of the disease before diagnosis. Other laboratory parameters, such as calciuria, 25OHD levels, and diagnostic imaging, were not performed in all study patients or were not recorded in the medical records. Only a small proportion of patients had all the parameters established for adequate follow-up of hypoparathyroidism, which limits the interpretation of these results. It should be mentioned that the calcium value recorded during the study corresponds to the first measurement recorded in the clinical history; a compilation of all serum calcium values for each patient was not made, although this parameter was taken into account when defining disease control. Serum magnesium measurements were also not taken into account.

It was not possible to establish the length of time during which patients maintained control of the disease; this is explained by the retrospective nature of the study and the inability to follow up with patients who did not have multiple assessments during the last year.

In conclusion, to the best of our knowledge, this is the first study on the clinical profile of Colombian patients with hypoparathyroidism, and it shows that a higher percentage of cases are secondary to surgical interventions when compared with other centers in the world. With regard to medical management, the doses of calcium and calcitriol used were relatively low, and with these it was possible to achieve disease control in little more than half of cases; on the other hand, patients who did not achieve adequate control received seemingly higher doses of the medications, which can be explained by individual variation in the disease.

Only a minority of patients received PTH therapy as an adjuvant in the pharmacological management of the disease, despite the fact that a fifth of the cohort met the criteria for its use. It is possible that the low use of this medicine explains, at least partially, the low level of control achieved in the total cohort, and that more widespread use of this therapy could considerably improve the proportion of patients who achieve the criteria for disease control, without increasing the risk of chronic complications associated with hypoparathyroidism.
Funding: financial support was provided by the Asociación Colombiana de Endocrinología, Diabetes y Metabolismo.

Disclosure: Alejandro Roman-González: speaker fees, advisory board honoraria, or travel expenses from: Amgen, ACOMM,

Table 2. Clinical manifestations. *The most frequently reported psychiatric condition was depression, in 7.4% of the whole cohort. **Only 18 (16.6%) patients had imaging of the central nervous system. ***Only 10.1% (n = 11) of patients had renal studies.

Table 3. Biochemical results and doses of calcium and calcitriol, discriminated by disease control.

Table 4. Rate of subjects meeting each criterion for rhPTH.
2020-06-18T09:06:04.415Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "3a56b2e78be32c476cdda0b5a01f7f4fab32febc", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/aem/v64n3/2359-4292-aem-64-03-0282.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eb3cefd7cf5a6c3a601e9d911fa8abd61053ead6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267374275
pes2o/s2orc
v3-fos-license
PREDICTIVE HEALTHCARE ANALYSIS OF PAKISTAN'S COVID-19 PANDEMIC USING DATA MINING AND TIME SERIES MODELLING

The novel coronavirus known as COVID-19 has become widespread throughout the world and presented new problems to the scientific community. This resulted in severe measures being implemented in several affected countries, including total lockdowns, trade and business closures, and travel restrictions, all of which had a major negative economic impact. Pakistan has also had five coronavirus waves. Thus, government officials, legislators, business associates, and entrepreneurs place a high value on understanding and anticipating how a nation might stop the spread of COVID-19. We use AI-based forecasting models, such as the time series models ARIMA, LSTM, FB Prophet, and VAR, to predict the spread of the COVID-19 pandemic. These techniques support the decisions made by legislators and public health authorities in the fight against the epidemic. This paper demonstrates the promising potential of time series models in forecasting COVID-19 cases and highlights their superior performance compared to the LSTM.

I. INTRODUCTION
Data mining is a complex AI technique used to extract novel, useful, and accurate hidden patterns or knowledge from datasets. Time series forecasting is a technique used to predict future values by analyzing past data collected over time. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) models are neural network architectures that can effectively handle time series data. Machine learning (ML), a broader domain, encompasses various methods and algorithms, including RNNs and LSTMs, which are employed for forecasting purposes. Facebook created an open-source program called Prophet to forecast time series data; it can handle missing data, seasonality, and other complications, and employs a combination of linear and non-linear models to create forecasts. VAR (Vector Auto Regression) is a statistical model used to examine the dynamic connection between time series. The primary aim of research on COVID-19 forecasting utilizing various forecasting approaches is to foresee the virus's future spread and effects. This can help with emergency planning, resource allocation, and public health policy. To examine historical data and forecast future trends in cases, fatalities, and other pertinent metrics, techniques including time series analysis, recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and vector autoregression (VAR) models can be utilized. Policymakers and public health authorities can use these projections to help them make more knowledgeable decisions to prevent the spread of the virus and lessen its effects on communities [3]. Most of the research reviewed in this paper uses AI forecasting models to find trends and patterns in occurrences related to infectious diseases. This paper will also examine the use of such methods that are solely concerned with predicting epidemiological variables such as cumulative cases, fatalities, and recoveries from the current COVID-19 pandemic.

II. RELATED WORK
Since January 2020, the number of COVID-19 cases in the US has increased. Even after restrictions were loosened, the number of cases kept rising despite social distancing and lockdowns. Modeling the disease's spread can help governments and healthcare professionals plan ahead and secure funding. For the healthcare system, precise short-term case forecasts are essential. Simple Moving Average, Exponentially Weighted Moving Average, Holt-Winters Double Exponential Smoothing (Additive), ARIMA, and SARIMA are just a few of the models that have been used since the pandemic's start. In that paper, ARIMA and SARIMA were selected for prediction, and the optimal model parameters for each were found using a grid search. According to the findings, ARIMA performs better than SARIMA in prediction, whereas the Holt-Winters Double Exponential model surpasses the Exponentially Weighted Moving Average and Simple Moving Average [4]. Another paper aimed to assess how well the ARIMA model predicted the spread of COVID-19, which the World Health Organization designated a global pandemic in March 2020 after it had infected over 4 million people and killed over 300,000 by early May 2020. Although it is believed that ARIMA is not appropriate for complex and dynamic environments, the study evaluated Kuwait as a case study to assess its accuracy over a considerable period of time. The actual statistics fell mainly within the ranges predicted by the selected ARIMA model at a 95% confidence level, despite the disease's unpredictability and the changes made by the Kuwaiti government. With a Pearson correlation coefficient of 0.996, the predicted values and observed data showed a significant correlation. This indicates that the predictions made by the ARIMA model are appropriate and sufficiently accurate [5]. A study was presented to detect COVID-19 misinformation in Swahili-language tweets. A machine learning model was used to carry out this study, with the highest accuracy achieved using an SVM, i.e.,
83.67% [6]. A comparison was presented in [7], which addressed COVID-19 issues using machine learning, deep learning, and artificial intelligence techniques. Regarding the COVID-19 pandemic, that work compiled contemporary and up-to-date information on fighting the pandemic using ML, DL, and AI techniques. A survey was conducted in [8] that explores several deep learning applications in natural language processing for COVID-19. It also presented some limitations, i.e., interpretability, learning from limited labeled data, generalization metrics, and data privacy. The survey noted that deep learning has additionally been utilized in epidemiological spread forecasting. Based on a comparative study of multiple research papers, the comprehensive study in [9] explores the numerous data mining algorithms that are used in combination with epidemiological prediction models. The prediction of COVID-19-specific risk using an LSTM-based ANN guided by Bayesian optimization was presented in [10]. A study [11] proposed an ML algorithm that accurately predicts the mortality risk of COVID-19 patients. Millions of people have been affected by the recent coronavirus outbreak (COVID-19), which has spread widely and caused severe disease. Modeling was done using data acquired between January 30 and April 26, 2020, while forecasting was done using data received between April 27 and May 11, 2020. The spatial distribution of illness risk was examined on a GIS platform using weighted overlay analysis [12]. Another paper analyses the COVID-19 situation in Pakistan, which at the time was dealing with the virus's fourth wave. It examines COVID-19 data for the nation using epidemiological models, evaluating both Bayesian and time-series SIR (tSIR) techniques while considering the fundamental susceptible-infected-recovered (SIR) model. Due to the government's successful strategy, the paper also found that the global assumption of a 14-day incubation time is inappropriate for Pakistan's data and that COVID-19 was not a pandemic in the country. According to the study, the posterior-based SIR (pSIR) model with a uniform prior for R0 and a Poisson distribution yields superior outcomes. The reporting rate (ρ) is less than 1, indicating underreporting of cases, according to the time-series SIR (tSIR) analysis [13]. An overview of state-of-the-art applications and algorithms for diagnosing and detecting the COVID-19 pandemic was presented in [14]. The authors of [15] proposed a Fake News Encoder Classifier (FNEC) for online published news related to COVID-19 vaccines. It uses an ELECTRA model to classify news articles as real or fake and creates a new dataset called COVAX-Reality for evaluation. Estimation of the strengths of negative and positive sentiments was carried out in [16]; text analytics of Twitter data using tweets, retweets, and hashtags were used to detect sentiment strength.

III. METHODOLOGY
This research utilizes Facebook Prophet, VAR, ARIMA, and LSTM models to predict upcoming COVID-19 cases and deaths. Models are created for each approach using the COVID-19 dataset, and their performance is assessed by comparing predictions via graphs and performance rates on a country-specific level.
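As a concrete illustration of the forecasting setup described above, the sketch below fits an ARIMA model to a cumulative case series with statsmodels and produces a short-horizon forecast. This is a minimal sketch, not the paper's exact pipeline: the file name, column names, and the (2, 1, 2) order are assumptions for illustration, whereas the paper tunes parameters per model.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical daily cumulative confirmed cases for Pakistan, indexed by date.
cases = pd.read_csv("pakistan_covid.csv", parse_dates=["date"], index_col="date")["confirmed"]

train, test = cases[:-14], cases[-14:]        # hold out the last 14 days
model = ARIMA(train, order=(2, 1, 2))         # (p, d, q); tune, e.g., via grid search
fitted = model.fit()

forecast = fitted.forecast(steps=len(test))   # 14-day-ahead forecast
print("MAE:", mean_absolute_error(test, forecast))
print("MSE:", mean_squared_error(test, forecast))
```

The same train/test split and error metrics can be reused for VAR, Prophet, and LSTM models, which is what makes the per-country comparison straightforward.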
A. Design Experiment
Before conducting experiments, several processes are completed, such as model building, data exploration, and data preparation. Data transformation includes data aggregation, extrapolation, creation of dummies, date-time transformation, and variable reduction. Data combining entails merging or combining datasets. The novelty in the next section lies in the comprehensive mathematical explanations of both Facebook Prophet and LSTM, making it a valuable resource for readers seeking a deep understanding of these forecasting techniques. Additionally, it provides practical implementation guidance for Facebook Prophet, enhancing its usability in real-world scenarios.

B. Exploratory Data Analysis
Exploratory data analysis (EDA) inspects data to identify its unique characteristics. As part of EDA, data summaries and visualizations are carried out, and interesting data points are considered. Exploratory data analysis should always be performed before any model is fitted to the data. The flow diagram of the EDA in our research work is shown in Figure 1.

D. Calculation of Quantitative Variables
The quantitative numerical features that play a significant role in data exploration and in analyzing the pattern of confirmed cases and deaths in the time series are represented statistically. The plot shows the skewness vs. density of the calculated numerical features, according to the presence of outliers and the distribution, for the study of the time series dataset. The following variables are calculated. These metrics are essential in the analysis of COVID-19 because they provide important insights into the current state and future trajectory of the pandemic.

E. Incremental Cases and Deaths
Incremental cases in the context of COVID-19 refer to the number of new cases reported in a given period. This value can be calculated by subtracting the number of cases reported in the previous period from those registered in the current period. For example, if there were 100 cases reported on Monday and 120 cases reported on Tuesday, then the incremental cases for Tuesday would be 20 (120 − 100). This value represents the increase in the number of cases from one day to the next and is used to forecast future trends in the spread of the disease. The incremental cases in Pakistan throughout the pandemic can be observed in Figure 3.

Figure 3: Frequency Plot of C19 Incremental cases per day

Incremental deaths of COVID-19 refer to the number of new casualties caused by the COVID-19 virus that occurred in a given period. This value can be calculated by subtracting the number of COVID-19 deaths reported in the previous time period from the number reported in the current time period. It represents the increase in the number of deaths caused by the virus from one day to the next and is used to forecast future trends in the spread of the disease. The incremental deaths in Pakistan can be observed in Figure 4.

Figure 4: Frequency Plot of Incremental C19 deaths per day
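The incremental-case calculation described above (e.g., 120 − 100 = 20) reduces to a first difference of the cumulative series. A minimal pandas sketch with hypothetical numbers:

```python
import pandas as pd

# Hypothetical cumulative confirmed cases on consecutive days.
cumulative = pd.Series([100, 120, 135, 135, 160],
                       index=pd.date_range("2020-03-02", periods=5))

incremental = cumulative.diff()   # new cases per day; the first value is NaN
print(incremental)
# Second entry: 20.0, matching the 120 - 100 example in the text.
```

The same one-liner applied to a cumulative death series yields incremental deaths.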
F. Case Fatality Rate
The number of COVID-19-related deaths divided by the total number of confirmed cases yields the Case Fatality Rate (CFR), which indicates the severity of the illness. The CFR is expressed as a percentage and indicates the risk of death for someone contracting the disease. A lower CFR means that the disease is less severe, while a higher CFR implies that the disease is more severe. It is important to note that the CFR can be influenced by many factors, such as the accuracy of case reporting, the timing and quality of medical intervention, and the age and underlying health conditions of those infected. In the context of COVID-19, the CFR has varied significantly depending on the location and the stage of the pandemic. The Case Fatality Rate in Pakistan throughout the pandemic can be observed in Figures 5 and 6.

Figure 6: Distribution Plot of Case Fatality Rate per day in Pakistan

The number of COVID-19 deaths in Pakistan divided by the total number of confirmed cases is known as the COVID-19 Case Fatality Rate, or CFR. Pakistan's CFR was estimated to be 1.5% as of January 2023. This indicates that, for every 100 confirmed cases of the virus in the country, approximately 1.5 patients die from COVID-19.

G. Day-to-Day Relative Change in Cases and Deaths
The day-to-day relative change in COVID-19 cases refers to the percentage increase or decrease in the number of confirmed COVID-19 cases from one day to the next. It is calculated as the difference between the number of cases on two consecutive days divided by the number of cases on the previous day.

Figure 8: Frequency Plot of the day-to-day relative change in deaths in Pakistan

A higher number of cases is indicated by a positive value, whereas a lower number of cases is indicated by a negative value. This metric provides insight into the trend of the spread of the disease. The day-to-day relative change in COVID-19 deaths refers to the percentage increase or decrease in the number of COVID-19 deaths from one day to the next. A positive value in Figures 7 and 8 denotes an increase in deaths, whereas a negative value denotes a decrease in deaths.

H. Comparison of Cases vs. Deaths Curve
The comparison between the COVID-19 cases and deaths curves refers to the graphical representation of the number of confirmed COVID-19 cases and the number of COVID-19-related deaths over time. This comparison provides a visual representation of the spread of the disease and its impact on human health, as shown in Figure 9.

I. Correlation Analysis
Correlation analysis is a statistical method for determining whether and how strongly two variables or datasets are related. In market research, correlation analysis is a tool used to look for significant patterns, trends, or relationships between quantitative data collected through methods such as surveys and polls. Finding trends in datasets is the main application of correlation analysis. Spearman's correlation is used when the linear relationship between two continuous variables is unknown, while the Pearson correlation coefficient evaluates the linear association between two variables.
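Both the CFR and the day-to-day relative change defined above are one-liners on cumulative series. In this hypothetical pandas sketch, `cases` and `deaths` are cumulative daily series, as in the previous example:

```python
import pandas as pd

cases = pd.Series([100, 120, 150, 200], index=pd.date_range("2020-03-02", periods=4))
deaths = pd.Series([1, 2, 2, 3], index=cases.index)

# Case Fatality Rate (%): cumulative deaths over cumulative confirmed cases.
cfr = deaths / cases * 100

# Day-to-day relative change (%) in cases: positive = increase, negative = decrease.
relative_change = cases.pct_change() * 100

print(cfr.round(2))
print(relative_change.round(2))   # e.g. 20.0% from the first day to the second
```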
J. Correlation between C19 Cases and Deaths in Pakistan and the US
Spearman's correlation (also called Spearman's rank-order correlation coefficient) can be calculated as follows. First, convert the data into ranks: the original data are converted into rankings, with the lowest value getting a rank of 1, and so on. The coefficient is then computed on these ranks; with no tied ranks it reduces to ρ = 1 − 6 Σ d_i² / (n(n² − 1)), where d_i is the difference between the two ranks of observation i and n is the number of observations. The correlation coefficient of each feature indicates how a unit change in the dependent component is correlated with a unit change in the independent feature. When the estimated coefficient is not zero, there is correlation, and hence forecasting power, between the estimate and the response features.

Table 1 presents the performance of the models: ARIMA and VAR perform better than FB Prophet and LSTM, with ARIMA performing best for case forecasting and VAR performing best for death forecasting. The novelty lies in the in-depth analysis and comparison of different time series forecasting models applied to COVID-19 data. It provides insights into the strengths and weaknesses of each model, helping readers make informed decisions when choosing a forecasting approach for similar datasets and scenarios.

V. CONCLUSION
In this paper, we examined COVID-19 data samples, specifically focusing on two variables: confirmed cases and deaths. Our analysis involved preprocessing and exploratory data analysis techniques. We calculated and visualized multiple metrics, including incremental cases and deaths, Case Fatality Rate, daily and 7-day changes, and week-to-week differences. Our objective was to identify the most active times and days for COVID-19 cases, as well as to analyze the relationship between cases and deaths through the cases vs. deaths curve for COVID-19 in Pakistan. These metrics are essential in the analysis of COVID-19 because they provide important insights into the current state and future trajectory of the pandemic. We also plotted comparison graphs of deaths vs. cases, comparing Pakistan's cases to those of another country to observe the effects.
It is shown that the data analysis process we applied to COVID-19 cases and deaths greatly aids understanding of COVID-19's effects in this dataset and is crucial to the results. In terms of performance on the Pakistan COVID-19 cases dataset, ARIMA outperforms deep learning and the other time series models. It achieves lower MAE and MSE, and a smaller error indicates a better model with a lower error rate than the other models. Its variance/root square score of -0.08478 is also the best among the models, and this correlation measure increases with model quality. ARIMA outperforms the other models because it performs better on both metrics. With the best-fitted model, the correlation between the actual and projected variables is quite good, and the ARIMA model on the Pakistan COVID-19 cases dataset performed significantly better than the other models. Given that it performed better on both metrics, VAR surpasses the other models in forecasting deaths. Forecasting algorithms can be valuable tools in the fight against COVID-19 in Pakistan, as they can help provide projections of the future spread of the virus and inform public health decision-making. In conclusion, forecasting algorithms can be helpful in the fight against COVID-19 in Pakistan; still, it is vital to use them in conjunction with other data and information, and to continuously update and refine the models as new data and information become available. In summary, the novelty of this research lies in its synthesis of the study's key findings, the exploration of analysis metrics and temporal factors in COVID-19 data, the evaluation of forecasting models, and the discussion of practical implications and future research possibilities in the fight against COVID-19 and other diseases.

Figure 5: Frequency Plot of Case Fatality Rate per day in Pakistan
Figure 10: Pearson Correlation of Pak vs. USA C19 Data
2024-02-02T16:05:16.061Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "a9147ac4b932cd5a6181782feacf1333c867e598", "oa_license": "CCBY", "oa_url": "https://jmsnew.iobmresearch.com/index.php/pjets/article/download/1029/634", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7939fe78e15a91981d84a06d88ed726364157e3a", "s2fieldsofstudy": [ "Medicine", "Computer Science", "Economics" ], "extfieldsofstudy": [] }
41164104
pes2o/s2orc
v3-fos-license
Sensory Gating Scales and Premonitory Urges in Tourette Syndrome

Sensory and sensorimotor gating deficits characterize both Tourette syndrome (TS) and schizophrenia. Premonitory urges (PU) in TS can be assessed with the University of Sao Paulo Sensory Phenomena Scale (USP-SPS) and the Premonitory Urge for Tics Scale (PUTS). In 40 subjects (TS: n = 18; healthy comparison subjects [HCS]: n = 22), we examined the relationship between PU scores and measures of sensory gating using the USP-SPS, PUTS, Sensory Gating Inventory (SGI), and Structured Interview for Assessing Perceptual Anomalies (SIAPA), as well as symptom severity scales. SGI, but not SIAPA, scores were elevated in TS subjects (p < 0.0003). In TS subjects, USP-SPS and PUTS scores correlated significantly with each other, but not with the SGI or SIAPA; neither PU nor sensory gating scales correlated significantly with symptom severity. TS subjects endorse difficulties in sensory gating, and the SGI may be valuable for studying these clinical phenomena.

INTRODUCTION
Tourette syndrome (TS) is one of several brain disorders characterized by symptoms that suggest failures in the automatic "gating" of sensory stimuli. In TS, intrusive sensory information is often experienced as pressure or discomfort, at or below the skin level, or as a mental sensation [1]. These "sensory tics" are often followed by, and may trigger, the motor and vocal tics that historically have defined this disorder. A variation of the "sensory tic" in TS takes the form of uncomfortable urges or mental states [2,3]. This sensory or mental discomfort offers TS patients an opportunity to identify an imminent motor or vocal tic, and to intervene using tools such as behavioral therapy. Thus, a more complete understanding of sensory phenomena may have direct clinical applications in TS, and valid methods of quantifying these experiences in TS have become increasingly important for treatment development [4]. Two such scales are the Premonitory Urge for Tics Scale (PUTS [5]) and the University of Sao Paulo (USP) Sensory Phenomena Scale (USP-SPS [6,7]). The PUTS is a brief self-report of the frequency of specific pre-tic sensory symptoms, while the USP-SPS assesses the frequency and severity of sensory phenomena that precede, accompany, or follow tics and other repetitive behaviors, such as compulsions or rituals. Impaired sensory gating also characterizes schizophrenia and has been quantified in scales including the Structured Interview for Assessing Perceptual Anomalies (SIAPA and SIAPA-CV [8,9]) and the Sensory Gating Inventory (SGI [10]). The aim of this preliminary study was to determine whether these scales of sensory gating provide novel information regarding TS symptoms, which might inform us about the role of sensory gating deficits in the genesis of symptoms in TS.

METHODS
Methods were approved by the UCSD and SDSU Institutional Review Boards; the study was conducted at the UCSD Medical Center. Twenty TS and 22 age-matched healthy comparison subjects (HCS) passed phone screens. Exclusion criteria for all subjects included serious medical, neurologic, or psychiatric illness (other than TS, OCD, or ADHD); schizophrenia in a first-degree relative; loss of consciousness (>1 min); current substance abuse or dependence; pregnancy; or known hearing loss. HCS were also excluded for a history of mental illness or psychotropic medication use. Presence or absence of sensory phenomena was not used as a basis for study inclusion/exclusion.
All screening questions were then repeated in person and all adults provided urine for toxicology; two adult TS subjects were excluded for positive toxicology. Medications (n) in TS subjects included: antidepressants (8), alpha-norepinephrine agonists (5), benzodiazepines (3), anticonvulsants (3), dopamine agonists (2), dopamine partial agonist/antagonists (2), and stimulants (2). Participants underwent structured and semi-structured clinical interviews for three purposes: (1) global clinical diagnosis: Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I/NP [11]) and Diagnostic Interview Schedule for Children, computerized version IV (C-DISC-IV [12]); (2) general symptom severity for TS and OCD: Yale Global Tic Severity Scale, adult and child versions (YGTSS and CYGTSS [13]), and Yale-Brown Obsessive Compulsive Scale, adult and child versions (YBOCS and CYBOCS [14,15,16]); and (3) specific symptom severity related to sensory phenomena and premonitory urges: USP-SPS [6,7], PUTS [5], SGI [10], and the SIAPA (adult and child versions) [8,9]. Nine TS subjects carried diagnoses of OCD and two carried ADHD diagnoses. Scale scores were treated as continuous variables, and group comparisons (TS vs. HCS) used mixed-design ANOVAs; items best accounting for group identity were detected via stepwise discriminant function analysis. Relationships between scale scores, or between scores and age, were assessed by simple regression analyses. Alpha was 0.05. No significant relationships were detected between clinical scales and sensory gating measures, nor were scales significantly different between TS subjects with vs. without comorbid OCD. Because SGI subscales were differentially sensitive to diagnosis, separate regression analyses with each subscale were conducted and again failed to detect any significant correlations with clinical scales.

DISCUSSION
The major aim of this study was to determine whether scales of sensory gating provide novel information regarding TS symptoms. The elevation of SGI scores in TS is a new and robust finding. The SGI was designed to quantify sensory perceptual disturbances in individuals with "psychosis-spectrum" symptoms [10]. Here, TS patients and controls appeared to be most separated by subscales for perceptual modulation, distractibility, and overinclusion. These subscales were developed for use in a different clinical population and, therefore, may not be maximally sensitive for distinguishing TS from comparison groups. Additional analysis identified the questions most sensitive for distinguishing the TS vs. HCS groups, related to perceptual modulation and distractibility, features of disorders comorbid with TS, such as OCD and ADHD, generally and within this sample. Therefore, the SGI may be detecting a broader sensory gating deficiency and not one specific to TS alone. It is important to note that while the SGI has been validated for use in HCS [10], it has not been validated for use in minors; while the present findings demonstrated no significant simple effect of age on total SGI scores, interaction effects on SGI score were detected between age, gender, and diagnosis. Generally, SGI subscore variability for HCS minors was comparable to that seen in HCS adults (SD range 3.45-10.25 vs. 2.63-8.08, respectively); this was also true in TS minors vs. adults (SD range 6.70-12.46 vs. 3.78-13.21). Two scales were used to assess clinical symptoms of premonitory or sensory phenomena in TS: the USP-SPS and the PUTS.
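The relationship between the two premonitory urge scales can be checked with an ordinary correlation test, in line with the simple regression analyses described in the methods above. The sketch below is illustrative only; the scores are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical PUTS and USP-SPS total scores for a small TS sample.
puts = np.array([18, 22, 25, 30, 27, 21, 33, 24])
usp_sps = np.array([6, 8, 9, 12, 10, 7, 13, 9])

r, p = stats.pearsonr(puts, usp_sps)
print(f"r = {r:.2f}, p = {p:.4f}")  # a significant positive r supports convergent validity
```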
To our knowledge, no published reports directly compare these scales, and the finding that they are significantly and positively correlated in this sample provides some evidence of internal validity, i.e., that both scales are measuring similar, though not identical, phenomena. That scores in both scales generally tend to increase with age is also consistent with the clinical experience that premonitory events are most often reported among children older than age 10; the age-dependent reporting of premonitory events is thought to reflect the normal development of introspective capabilities, i.e., of bodily awareness, rather than an age-related change in the illness per se [5]. The scales used to assess sensory gating and perceptual anomalies appear to detect overlapping sets of information, which diverge as they relate to TS. In other words, while SGI and SIAPA scores were significantly correlated with each other, only SGI scores were elevated in TS. Furthermore, SGI scores were not significantly related to scores in scales of premonitory urges or motor/vocal tics in TS. Previous studies have also reported that SGI and SIAPA do not strongly predict other measures of either sensory or sensorimotor gating [17,18]. These findings suggest that the construct of "sensory gating" as assessed by the SGI is relevant to TS, and that sensory gating deficits in TS reflect processes that are not fully captured by existing scales for motor, vocal, or sensory tics. Presumably, this dissociation at a level of symptomatology reflects separable underlying neural and perhaps genetic substrates, and suggests that the SGI, or some TS-focused derivative of this inventory, might be a valuable phenotype to add to ongoing neuroimaging and genetic studies of TS. This study is limited by the small sample, containing a heterogeneous group of TS minors and adults, males and females, with a range of comorbid conditions and medications. Nonetheless, the robust findings suggest that the SGI might identify new information about TS that is not detected by existing scales for premonitory symptoms and, therefore, might have value in characterizing the disorder, i.e., via subtyping, assessing treatment response, or identifying different underlying etiologies or patterns of pathophysiology.
2018-04-03T06:19:39.269Z
2011-03-22T00:00:00.000
{ "year": 2011, "sha1": "68f748045a7cacb72619cf7b1f317f671271ba7a", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2011/986538.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "70475635dd97161cde4f4bbecd74065a56f781cc", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
56480846
pes2o/s2orc
v3-fos-license
The Effects of Climate Seasonality on Behavior and Sleeping Site Choice in Sahamalaza Sportive Lemurs, Lepilemur sahamalaza

Temperature, rainfall, and resource availability may vary greatly within a single year in primate habitats. Many primate species show behavioral and physiological adaptations to this environmental seasonality, including changes to their diets and activity. Sahamalaza sportive lemurs (Lepilemur sahamalaza) inhabit the northwest of Madagascar and have been studied only during the dry, colder period of the year. We investigated potential effects of climate seasonality on this species by collecting behavioral data between October 2015 and August 2016, encompassing both the warmer wet and the colder dry seasons. We collected 773.15 hours of behavioral data on 14 individual sportive lemurs to investigate year-round activity budgets, ranging behavior, and sleeping site locations. Additionally, we recorded temperature and rainfall data at our study site to describe the environmental conditions during the study period. The study individuals significantly decreased their time spent traveling and increased their time spent resting in the dry season compared to the wet season. Although home range size and path lengths did not differ over the study period, sleeping locations were significantly different between seasons, as the lemurs focused on more confined areas in colder periods. Overall, the results indicate that Sahamalaza sportive lemur behavior varies with season, in line with reports for other primates.

Electronic supplementary material: The online version of this article (10.1007/s10764-018-0059-1) contains supplementary material, which is available to authorized users.

Introduction
Most primate species inhabit tropical and subtropical regions (Myers et al. 2000; Wilson 1988). These regions are characterized by high precipitation and, to varying extent, seasonality. Seasonality is commonly defined as "the occurrence of certain obvious biotic and abiotic events, or groups of events, within a definite limited period, or periods, of the astronomic (solar or calendar) year" (Lieth 1974, p. 5). The degree of seasonality depends on latitude, as habitats closer to the equator do not show pronounced fluctuations in abiotic factors, such as rainfall and temperature (Addo-Bediako et al. 2000; Stevens 1989; van Schaik and Pfannes 2005). In these regions, rainfall occurs year-round or in multiple shorter rainy seasons. These short periods merge into one longer rainy season as one moves away from the equator (van Schaik and Pfannes 2005). Dry seasons, which are usually defined as the number of consecutive months in which rainfall is <100 mm/mo (Hemingway and Bynum 2005; van Schaik and Pfannes 2005), therefore increase in length with increasing latitude, resulting in more pronounced seasonal differences in climate (van Schaik and Pfannes 2005). Similarly, year-round fluctuations in photoperiod increase with increasing latitude (Hill 2005). Recurring oscillations in rainfall, temperature, and photoperiod may affect biotic factors such as plant phenology (Lieth 1974). The resulting variation in food and water availability may influence and shape aspects of primate ecology and behavior by affecting year-round activity budgets, diet composition, and habitat use.
Seasonality can further influence the timing of physiological processes such as reproduction (Brockman and Rasmussen 1985) and can affect primates indirectly via varying levels of predation pressure throughout the year (Gursky and Nekaris 2007; Irwin et al. 2009; Karpanty and Wright 2007; Mitani and Watts 2005; Rasmussen 2005). Correlations among temperature fluctuations, changes in day length, and food-rich (or scarce) periods make it difficult to identify a single underlying mechanism driving primate behavioral adaptations (Brockman and van Schaik 2005) and limit our ability to predict the effects of climate change on primates. Generally, in times of low resource abundance, primates switch diets, increase their ranging, or attempt to save energy by reducing their overall activity (Ganzhorn et al. 2003). However, resource abundance can be coupled with abiotic variables such as temperature and rainfall (Tutin and Fernandez 1993; van Schaik et al. 1993), which themselves can affect primate behavior (Brockman and van Schaik 2005). Therefore, it is likely that primate ecology and behavior are shaped by an interplay of all these factors, as suggested for lemurs (Wright 1999). Madagascar lies in the most southern tropical latitudes and is characterized by varying degrees of seasonality (Richard et al. 2002; Wright 1999; van Schaik and Pfannes 2005): the central highlands as well as the western and northern regions are marked by long dry seasons, while the southern region is characterized by very little overall rainfall; high precipitation is common throughout the year in the eastern region of the island (Ganzhorn et al. 2001; Tattersall and Sussman 1975). Across Madagascar, lemurs have evolved multiple adaptations to conserve energy in environmentally stressful times (such as cold, dry seasons): most species show low basal metabolic rates and highly seasonal reproduction, while torpor and hibernation are common among the smaller lemuroids (Kappeler and Ganzhorn 1993; Wright 1999). Thermoregulatory and energy-conserving behavior often occurs in areas with a prolonged dry season, a period in Madagascar marked by lower temperatures, little precipitation, and food and water scarcity (Sato et al. 2014; Wright 1999). Diurnal and cathemeral lemur species rest more, travel less, and increase sunbathing behavior during the dry season, particularly when ambient temperatures are low, e.g., in collared brown lemurs (Eulemur collaris: Campera et al. 2014; Donati et al. 2010), diademed sifakas (Propithecus diadema: Irwin 2014), brown lemurs (E. fulvus rufus: Sato 2012), ring-tailed lemurs (Lemur catta: Simmen et al. 2010), and ruffed lemurs (Varecia rubra: Vasey 2005; V. variegata variegata: Morland 1993). Decreased activity levels are not the only strategy lemurs use: brown lemurs (E. fulvus rufus) inhabiting the highly seasonal northwest of Madagascar show a combination of dietary and habitat changes in response to fruit scarcity during the dry season (Sato 2013; Sato et al. 2014). For small-bodied, nocturnal lemurs, seasonality in climate may impose even greater constraints, as temperatures during the dry season drop significantly at night (Aujard et al. 1998). To cope with this environment, members of the family Cheirogaleidae often hibernate during cooler periods of the year (Blanco and Rahalinarivo 2010; Dausmann et al. 2004; Fietz and Dausmann 2006; Kobbe et al. 2014; Ortmann et al. 1997; Schmid 2000; Schülke and Ostner 2007).
Nocturnal species that do not hibernate or show seasonal torpor, however, exhibit behavioral adaptations similar to those of diurnal lemurs: sportive lemurs of the family Lepilemuridae rest more and travel less during the dry, colder season (Dröscher and Kappeler 2014; Dröscher et al. 2016; Ganzhorn 1993; Hladik and Charles-Dominique 1974; Nash 1998). Their reproduction is highly seasonal, and births are timed around the end of the dry season (Hilgartner 2006). These small (600-1200 g; Mittermeier et al. 2010), arboreal primates are highly folivorous (Dröscher and Kappeler 2014; Hladik and Charles-Dominique 1974; Hladik et al. 1980; Seiler et al. 2014). Recent studies show that, although many sportive lemur species occur in deciduous forests where leaf availability fluctuates, dry seasons may not necessarily pose energy constraints in terms of food scarcity, but rather induce cold stress (Dröscher et al. 2016; Dröscher and Kappeler 2014; Ganzhorn 2002). As a behavioral response, sportive lemurs decrease their activity levels and ranging distance and increase resting times during the colder period (Nash 1998). White-footed sportive lemurs (Lepilemur leucopus) even increase food intake and time spent feeding during the colder period, possibly to compensate for higher energetic demands due to colder temperatures (Dröscher and Kappeler 2014; Dröscher et al. 2016). As most sportive lemur species were described only in the past decade (Andriaholinirina et al. 2006; Craul et al. 2007), we have limited knowledge of their behavior and ecology in comparison to what is available for other primate taxa. This includes the Critically Endangered Sahamalaza sportive lemur, Lepilemur sahamalaza (the name was changed recently from L. sahamalazensis: Andriaholinirina et al. 2017), inhabiting the last remaining forests of the Sahamalaza Peninsula in northwest Madagascar. Previous studies that collected data exclusively during the colder, dry season indicate that Sahamalaza sportive lemurs are overall very inactive, with long resting bouts between short bursts of activity (Seiler et al. 2015), and that they adjust their behavior to rainfall, e.g., by switching between sleeping sites (Seiler et al. 2013). However, owing to the high seasonality in this region, with dry seasons lasting up to 6 mo each year (Volampeno et al. 2011), the results may not provide a complete understanding of Sahamalaza sportive lemur behavior. We conducted year-round nocturnal behavioral observations of 14 Sahamalaza sportive lemurs to record activity budgets. We hypothesized that the species' activity budget and sleeping site locations differ between the wet and dry season. If the seasonality of the habitat, measured as temperature and rainfall fluctuations throughout the year, is an environmental stressor, then we predicted that Sahamalaza sportive lemurs would show behavioral adaptations to this variable environment by adjusting activity budgets, ranging behavior, and sleeping site choice. Specifically, we predicted that Sahamalaza sportive lemurs would rest more and travel less in colder periods. We further predicted that home range size would decrease as a reflection of a decrease in general activity, but did not predict that the home range location would move between the wet and the dry season. Finally, we investigated the effects of seasonality on sleeping site location, predicting a shift between months with rainfall and those without.
Study Site
We conducted the study in Ankarafa Forest in Sahamalaza-Iles Radama National Park, in northwestern Madagascar (Fig. 1). Ankarafa Forest is the most western forest patch in the protected area, located between 13°52'S and 14°27'S as well as 45°38'E and 47°46'E, and is characterized by a mix of dry deciduous and Sambirano rainforest vegetation structures with a canopy up to 25 m in height, as is typical for Malagasy lowland forests (de Gouvenain and Silander 2003; Dumetz 1999; Grubb 2003; Volampeno et al. 2013). The forest consists mainly of regenerated forest with some old-growth vegetation remaining (Seiler et al. 2013), and human activities and anthropogenically caused fires have influenced the vegetation structure: differently degraded patches occur throughout the forest. In addition, nearly a quarter of the forest is composed of exotic and invasive species, such as mango trees, Mangifera indica, and nonnative bamboo, Bambusa sp. (Schwitzer et al. 2007; Volampeno et al. 2013). We collected data for 10 mo between October 2015 and August 2016, covering the wet season (October-March) and the dry season (April-September). We further divided the seasons into early and late subseasons to improve the resolution of the data (Table I). We based this division on preliminary weather data (early subseasons incorporate the transition periods) and on Sahamalaza sportive lemur reproductive patterns (birth, premating, mating, and postmating), based on previous observations and preliminary data (Ruperti 2007; Seiler et al. 2015). Infants were born between late September and early November, while F. Ruperti observed mating in May and early June. During the study period, we recorded daily minimum and maximum temperatures, measured with digital thermometers (TFA Dostmann, Wertheim, Germany), and measured daily rainfall using a simple rain gauge.

Study Subjects
We captured 14 Sahamalaza sportive lemurs between September and October 2015 in the study forest and fitted them with radio-collars (3.5 g, Biotrack, Wareham, UK). Four individuals vanished during the data collection period: we assumed three deaths were due to predation, as we found carcasses or remains; we could not account for one disappearance. Three individuals vanished at the beginning of the early dry season and the fourth at the end of the study. Unless stated otherwise, we included data for all available individuals in the analyses.

Annual Activity Budget and Home Ranges
We conducted behavioral observations on the radio-collared individuals throughout the study. Two teams, each consisting of three observers, followed two individuals simultaneously from 18:00 h to 24:00 h; each team used a SIKA radiotracking receiver (Biotrack, Wareham, UK) and a 3-element Yagi antenna (Biotrack, Wareham, UK). In a pilot study, I. Mandl calculated activity budgets from 327.7 h of continuous behavioral observation on six individual Sahamalaza sportive lemurs between July and October 2013. Comparing the percentage of time spent on each individual behavior (see Table II) with paired t-tests for the first (18:00 h-24:00 h) and the second half of the night (0:00 h-06:00 h) revealed no statistically significant differences (resting: t(5) = 0.97, P = 0.39; feeding: t(5) = 0.9, P = 0.40; grooming: t(5) = −0.7, P = 0.52; locomotion: t(5) = 1.2, P = 0.21; and not visible: t(5) = 1.5, P = 0.19). We therefore collected behavioral data only during the first half of the night.
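The pilot comparison described above is a paired t-test on per-individual activity percentages for the two halves of the night. A minimal Python sketch with hypothetical resting percentages for six individuals (the study's analyses were run in R):

```python
from scipy import stats

# Hypothetical % of time resting in each half of the night (n = 6 individuals).
first_half = [62.0, 58.5, 70.1, 66.3, 61.2, 64.8]    # 18:00-24:00 h
second_half = [60.5, 59.0, 68.0, 65.1, 63.3, 63.9]   # 00:00-06:00 h

t, p = stats.ttest_rel(first_half, second_half)      # paired, two-tailed
print(f"t(5) = {t:.2f}, P = {p:.2f}")                # df = n - 1 = 5, as reported
```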
The observers followed the focal individuals using headlamps and torches and dimmed these lights if the animals were close. We recorded behaviors continuously, as described in the ethogram (Table II), giving a detailed activity budget for each individual. We recorded GPS points with a handheld GPS (GPSMAP 60CSx, Garmin Ltd., Schaffhausen, Switzerland) at each tree the focal individual visited during observation. Although observations of behaviors were often difficult because of the dense canopy, the sportive lemurs did not show flight behaviors typical of other primates and occasionally approached the observers as close as 1 m to settle and feed. The observers aimed to avoid disturbance to natural behaviors where possible.

Table II. Ethogram of recorded behaviors (Behavior: Description)
Resting (vigilant): The lemur is stationary but alert, looking around and directing its gaze in various directions.
Resting: The lemur sits on a support, is not alert, and directs its gaze in one direction, eyes half-closed or closed.
Feeding: The lemur is handling food and eats, chewing visibly or audibly. If the lemur was partially or wholly out of view, we ascertained feeding behavior by the characteristic rustling and dropping of half-eaten food items, such as leaves.
Grooming: The lemur is licking or scratching its fur.
Locomotion: The lemur moved, by walking, climbing, or jumping, over a distance of >50 cm. If the lemur was partially or wholly out of view, we ascertained locomotion by movement of branches and leaves at the animal's location.
Other: Behaviors not described above, including social interactions, vocalizing, and infant care.
Not visible: The lemur is not clearly visible and we cannot observe its behavior.

Sleeping Sites
We visited the study individuals at their sleeping sites three times a week during the study (Fig. 2). We located individuals at their sleeping sites and identified them with the help of their radio-collars. We recorded a GPS point for the sleeping site with a handheld GPS if we could locate it clearly. If animals were not visible, we ascertained their location using the signals of the radio-collars and triangulation. We recorded the visibility of the individual (visible/not visible) at each site. The time of day at which we recorded sleeping site locations varied throughout the study period, but we assume that this did not influence our data, as Sahamalaza sportive lemurs do not usually move or change sleeping sites during the daytime (Ruperti 2007; Seiler et al. 2013). If we could not identify the sleeping tree clearly, we did not take a GPS point.

Analysis

Temperature and Rainfall
We calculated the mean minimum and maximum temperatures for each month and each subseason and compared them between subseasons using a Kruskal-Wallis analysis of variance (a short sketch of this test appears below). We calculated total rainfall for each month. Rainfall occurred mainly during the daytime, but we terminated five nightly observations in February early because of excessive rainfall. We excluded these from our analysis. We therefore did not examine the effects of rainfall on behavior, and considered rainfall only in the analysis of path lengths for nights on which rainfall was light enough to allow full behavioral observations.

Activity Budget, Home Range Size, and Path Length
We collected 773.15 h of behavioral data between October 2015 and August 2016. We determined activity budgets using only the time the animals were clearly visible (Table III).
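As an illustration of the subseason temperature comparison described under Temperature and Rainfall above, the following is a minimal Python sketch (the study's analyses were run in R; the nightly minimum temperatures, in °C, are hypothetical):

```python
from scipy import stats

early_wet = [23.1, 24.0, 22.8, 23.5]
late_wet = [24.2, 25.1, 24.8, 24.5]
early_dry = [19.0, 18.2, 19.5, 18.8]
late_dry = [16.5, 17.0, 15.8, 16.2]

h, p = stats.kruskal(early_wet, late_wet, early_dry, late_dry)
print(f"H = {h:.2f}, P = {p:.4f}")  # a small P suggests temperatures differ among subseasons
```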
We calculated the percentage of time an individual spent on each behavior for each subseason. To test whether season affected activity budgets, we computed linear mixed models (LMMs) for each behavior, with the response variable being the percentage of time engaged in the behavior, and the fixed effects being subseason (early wet, late wet, early dry, late dry), sex (male/female), and their interaction. We set individual ID as a random effect to account for interindividual differences. We also controlled for differences in behavior due to similarities in habitat structure by including a random effect of forest area; we classified individuals whose home ranges overlapped to any degree as living in the same area, resulting in five forest areas. We added 0.5 to all values to eliminate analysis problems arising from 0 values and log-transformed the response variables to achieve normality.

To compute home range sizes, we visualized all GPS points collected during behavioral observations using Quantum GIS (ver. 2.14.0, QGIS Development Team). We excluded eight points, as they were clearly not within the range of the respective individuals (e.g., lying up to 1 km outside the forest border), indicating measurement errors. We used the remaining GPS points to compute kernel density estimation (KDE) distributions: home ranges that give the percentage likelihood of an animal residing in a given area based on the GPS relocations (Worton 1989). We calculated 50 and 99% KDEs for each individual using a least-squares cross-validation (LSCV) bandwidth selector. We chose 99% KDEs rather than the commonly used 95% density estimation because LSCV may undersmooth estimations of small home ranges of <1 ha, giving very conservative results (Blundell et al. 2001;Seaman and Powell 1996;Steury et al. 2010). We did not follow a fixed time schedule but collected GPS points for every tree we saw the lemurs visit, resulting in varied temporal autocorrelation and different sample sizes. As the study individuals often rested in the same tree for multiple hours, a fixed sampling regime would have resulted in datasets consisting in large part of duplicates, introducing biases in utilization distributions (Katajisto and Moilanen 2006). However, while a fixed sampling regime is recommended to decrease biases in estimates of home range use (de Solla et al. 1999), calculating home range size via KDE does not necessarily require independent data points (Blundell et al. 2001;de Solla et al. 1999;Swihart and Slade 1997). We therefore focused on calculating home range size from the available data, rather than interpreting home range use, but acknowledge that the limited sample sizes may have decreased home range size estimates (see Electronic Supplementary Material [ESM] Fig. S1 for plots of cumulative home range size that illustrate a steep increase in size estimate with fewer nights of data collection). We compared mean home range size between the wet and the dry season using paired Student's t-tests (de Winter 2013). We also calculated the percentage of overlap of the 50% core KDEs (the area in which an animal is most likely to be found 50% of the time) in the wet and dry seasons to determine whether the study individuals changed their centers of activity over time. We excluded three individuals from this analysis, as they disappeared at the beginning of the dry season and we did not have complete home range data for them.
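The home range workflow above can be reproduced in outline with the adehabitatHR package, which the analysis section below names explicitly. The relocations here are simulated; the grid and extent settings are our choices to keep the 99% contour inside the estimation grid, and LSCV may fail to converge on clumped real-world data.

```r
library(sp)
library(adehabitatHR)

# Simulated relocations (projected coordinates in meters) for one
# individual; the real analysis used one point per visited tree
set.seed(1)
xy   <- data.frame(x = rnorm(200, 0, 40), y = rnorm(200, 0, 40))
spdf <- SpatialPointsDataFrame(xy, data = data.frame(id = rep("ind1", 200)))

# Kernel utilization distribution with a least-squares
# cross-validation (LSCV) bandwidth, as in the text
kud <- kernelUD(spdf[, "id"], h = "LSCV", grid = 200, extent = 2)

# 50% core and 99% home range areas (returned in hectares for
# meter-based coordinates)
kernel.area(kud, percent = c(50, 99))

# Wet vs. dry season comparison of per-individual home range sizes
# (placeholder values, in ha) with a paired Student's t-test
wet <- c(0.8, 1.1, 0.9, 1.3, 0.7, 1.0)
dry <- c(0.9, 1.0, 1.0, 1.2, 0.8, 1.1)
t.test(wet, dry, paired = TRUE)
```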
We could not compare the size of the home ranges for each subseason because of limitations of the datasets. We therefore focused on daily path length to describe variation in ranging behavior. We calculated daily path length as the distance traveled in meters by each individual during each night of full behavioral observations (18:00 h-24:00 h), using the GPS points collected during those nights with the Points-to-Paths plugin in Quantum GIS (Hiatt 2015). We investigated the effect of temperature and subseason on path length over the year with an LMM, setting minimum temperature measured on days for which path length data were available, sex of the individual, and subseason as fixed effects. As rainfall affects primate ranging behavior (e.g., Ganas and Robbins 2005), we included rainfall (nights with >5 mm rainfall vs. those with <5 mm rainfall) as a fixed effect. However, this analysis encompasses only nights in which rainfall was light enough to allow full behavioral observations. We also included individual ID and forest area as random effects. We considered only minimum temperatures in the analysis, as these were recorded at night during the active period of the sportive lemurs and may thus have directly influenced activity.

Sleeping Sites

We calculated the percentage of days each individual was visible at its sleeping sites for each subseason, using the total number of days we recorded the individuals at their sleeping sites. Although we could determine the exact sleeping tree for most days, the study individuals were often not visible, as they often hid in tree holes or foliage, making it impossible to quantify sleeping site types. We plotted the collected GPS points of sleeping sites onto each individual's home range for each season to compare 1) the location and 2) the spread of sites. We calculated the spread of sleeping sites by determining the distance of each sleeping site to the mean GPS location of all sleeping sites, the centroid (the standard distance). We calculated the centroid as the mean latitude/longitude of sleeping sites for each subseason and individual. We then investigated the effect of subseason on the log-transformed standard distances using an LMM, including individual ID and annual home range size as random factors to account for interindividual variation and for differences in distance caused by home range size.

We performed all statistical analysis in R (ver. 3.3.1, R Core Team) using the packages MASS (Venables and Ripley 2002), sp (Pebesma and Bivand 2005), and raster (Hijmans 2016). We computed home ranges using the package adehabitatHR (Calenge 2006) and produced all LMMs with the package lme4 (Bates et al. 2015). For all LMMs, we explored the data to ascertain that they met the assumptions of the models. We also reviewed diagnostic plots of residuals. Where we made multiple comparisons, we adjusted p-values using the Holm-Bonferroni method. We set the significance level to P = 0.05, and tests were two-tailed unless stated otherwise.
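A compact sketch of the sleeping-site spread analysis: compute each site's distance to its per-individual, per-subseason centroid, then fit the LMM in lme4 and test the subseason effect with a likelihood-ratio test. All data below are simulated, and treating home range size as a random grouping factor simply mirrors the description above.

```r
library(lme4)

# Simulated sleeping-site records: projected coordinates (m),
# individual ID, subseason, and annual home range size (placeholders)
set.seed(42)
d <- data.frame(
  x  = runif(88, 0, 100), y = runif(88, 0, 100),
  id = factor(rep(1:11, each = 8)),
  subseason = factor(rep(c("early_wet", "late_wet",
                           "early_dry", "late_dry"), 22)),
  hr_size = factor(rep(round(runif(11, 0.5, 2), 2), each = 8))
)

# Distance of each site to the centroid (mean location) of its
# individual-by-subseason group: the "standard distance"
d$cx <- ave(d$x, d$id, d$subseason)
d$cy <- ave(d$y, d$id, d$subseason)
d$std_dist <- sqrt((d$x - d$cx)^2 + (d$y - d$cy)^2)

# LMM on log-transformed standard distances with random intercepts
# for individual ID and home range size; small simulated groups may
# trigger singular-fit warnings
m  <- lmer(log(std_dist) ~ subseason + (1 | id) + (1 | hr_size), data = d)
m0 <- update(m, . ~ . - subseason)
anova(m0, m)  # likelihood-ratio chi-square for the subseason effect
```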
Data Availability

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Ethical Note

A team of trained veterinarians captured the study individuals during the daytime at their sleeping sites using nets and anaesthetized each individual with 0.1 ml of Zoletil 100, administered by hand or with a Telinject blowpipe. The team weighed the subjects and took standard measurements. They subcutaneously fitted captured individuals with a microchip (8 mm × 1.4 mm ISO FDXB, Micro-ID, West Sussex, UK) for future identification in case of recapture. We fitted all individuals, eight females and six males, with cable-tie VHF radio-collars that did not exceed 0.7% of their body mass (3.5 g, Biotrack, Wareham, UK). We monitored the captured individuals for ≥6 h before releasing them at the capture site at the onset of their normal nocturnal activity period. We also visited all individuals daily and checked for signs of deteriorating health or problems with the collars. At the end of the study period, we recaptured all remaining individuals using the foregoing methods. All subjects had gained mass over the year and all collared females showed signs of pregnancy, indicating that the collars did not prevent mating and reproduction. The veterinarian team removed the remaining collars successfully and released the individuals back into the wild.

The authors declare that they have no conflict of interest.

Results

Temperature and Rainfall

Mean minimum and maximum temperatures and rainfall varied across the study period (Fig. 3). Rainfall was highest in January (612 mm) and February (>800 mm), dropped markedly in March (302 mm), and ceased in April (Fig. 3). Total rainfall over the data collection period was 2252 mm. Both minimum and maximum temperatures differed significantly between the subseasons (Kruskal-Wallis test: Tmin: H = 146.21, df = 3, P < 0.01; Tmax: H = 4.753, df = 3, P < 0.01). Pairwise Mann-Whitney U tests revealed that maximum temperature was significantly lower in the late dry season than in the early dry (z = −1.91, P = 0.05) and the late wet season (z = −3.04, P = 0.002), despite a drop in maximum temperature during the wet season. Minimum temperature did not differ significantly between the early wet and the late wet seasons, but all other comparisons showed statistically significant differences (early wet-early dry: z = −4.91, P < 0.01; early wet-late dry: z = −7.67, P < 0.01; early dry-late dry: z = −7.01, P < 0.01) as the temperature decreased over time (Fig. 4).

Seasonal Effects on Activity Budget

The study subjects spent a mean of 50% of their active time Resting Vigilant in all subseasons. Time spent on all other behavioral categories varied across the year (Table IV). Subseason had a significant effect on the time spent Resting (LMM: χ² = 22.57, df = 3, P < 0.01), and study individuals spent significantly less time resting in the late wet season than during any other period (late wet-early wet: z = −2.8, P = 0.02; late wet-early dry: z = 2.7, P = 0.03; late wet-late dry: z = 4.1, P < 0.01) (Fig. 5). The interactions between sex and subseason showed no statistical significance for any recorded behavior.

Seasonal Effects on Sleeping Site Choice

The visibility of the study individuals during the day varied between seasons (Table V). Sleeping sites were usually liana tangles, cavities in dead or living trees, palm leaves, or simply branches. Some individuals used the same trees repeatedly year-round. Cavities in dead or living trees were used mainly during the colder months (I. Mandl pers. obs.). These cavities fulfilled a double function: the exposed entry hole enabled sunbathing, while the cavity was often deep enough for the individual to fully hide from predators.
However, as we could not see individuals in their sleeping trees on 31.6% of the days we recorded them, we could not quantify the number of different sleeping site types. Over the study period, each study subject used multiple sleeping sites located in different areas of its home range (Fig. 7; the GPS points in this figure are the locations of sleeping sites that we could determine; we found individuals classed as "not visible" using their radio-collars but could not see them clearly). The distance to the spatial centroid was affected by subseason (LMM: χ² = 99.43, df = 3, P < 0.01). Sites were further apart in the late wet (z = −9.7, P < 0.01) and early dry seasons (z = −6.7, P < 0.01) than in the late dry season (Tukey-adjusted post hoc pairwise comparisons), when study individuals used sleeping sites in a limited area of their home range (Fig. 8).

Discussion

Temperature and rainfall in the study region varied considerably over the study period of 10 mo, with the late dry season showing the lowest temperatures. Rainfall occurred mainly between October and March. Sportive lemurs showed few sex differences in behavior, and only locomotion and resting differed significantly between subseasons. Time spent feeding increased over the dry season, but this increase was not statistically significant. There were no measurable differences in the sizes of home ranges between the dry and the wet season, but rainfall had a significant negative effect on ranging behavior. While we could not record the number and types of sleeping sites, the locations of sleeping trees varied across subseasons, with a wider spread of sleeping sites in the late wet season than in the rest of the year.

Seasonal Effects on Activity Budget

As we predicted, Sahamalaza sportive lemurs rested more and traveled less in the coldest period of the year. Across all seasons, Resting Vigilant made up nearly half of the activity budget, in line with previous studies showing that this species remains stationary for prolonged periods at night (Seiler et al. 2014). However, time spent resting increased significantly in the coldest period. These findings mirror what previous studies suggested to be a strategy for energy conservation during times of low resource abundance (e.g., Eppley et al. 2016;Ganzhorn et al. 2004;Nowack et al. 2013). The lemurs fed on the leaves of >40 different species of plants in previous studies (Seiler et al. 2014). During the present study, they added fruit to their diet in the wet season, but leaves constituted most of the diet year-round (I. Mandl pers. obs.), as observed in other folivorous primate species such as mantled howler monkeys (van Schaik and Brockman 2005). More long-term studies on phenology and nutrition are needed to determine to what extent Sahamalaza sportive lemurs are subjected to resource fluctuations, but findings fail to support the idea that the coldest period is the environmentally most stressful time due to food limitation for red-tailed sportive lemurs (Lepilemur ruficaudatus: Ganzhorn 2002). White-footed sportive lemurs (L. leucopus) decreased activity levels in the coldest period of the year, while leaves were still abundant, and increased them again with rising ambient temperatures, when food availability reached a low point (Dröscher and Kappeler 2014). The authors suggest that minimum temperatures affected energy expenditure more than fluctuations in dietary resources did.
An increase in food intake, hypothesized to fuel heat production during the dry season, further points toward behavioral adaptations to cold temperatures (Dröscher et al. 2016). The sportive lemurs in our study increased the proportion of time spent feeding during the colder periods, but this effect was not statistically significant, and data on resource availability are needed to conclude whether increased time spent feeding reflects compensatory behavior for a diet of lesser quality (e.g., Hendershott et al. 2016) or a thermoregulatory strategy (e.g., Nowack et al. 2013). A further seasonal adaptation in feeding behavior is a change in the nutrient composition of the diet: white-footed sportive lemurs increased nonprotein (e.g., fiber) intake during the dry season, possibly owing to variations in available foods (Dröscher et al. 2016). Diets high in fiber can affect heat production (Zhao and Wang 2007), further inducing thermoregulatory stress, but more research is needed to investigate the intertwined effects of high-fiber, folivorous diets and thermoregulation in primates.

A change in the proportion of resting to traveling has been suggested to be a thermoregulatory strategy in other small, nocturnal primate species (Knox and Wright 1989;Schmid and Kappeler 2005). Behavioral thermoregulation to counteract cold stress can include a decrease in active behavior, positional changes (e.g., a "hunched" posture while resting; Dagosto 1995), or social huddling during the day and night (Gilbert et al. 2010;Ostner 2002). During the present study, a female spent 3 h huddling with an unknown, smaller, and therefore likely younger individual during a particularly cold night, possibly to minimize heat loss, as occurs in a wide range of endothermic animals (Gilbert et al. 2010;Kotze et al. 2008;Ostner 2002;Savagian and Fernandez-Duque 2017). Huddling induces physiological changes in mouse lemurs, Microcebus murinus, decreasing metabolic costs and thus conserving energy (Perret 1998). Sportive lemurs have low resting and basal metabolic rates, like other strepsirrhines (Bethge et al. 2017;Dorcas and Crompton 1998;Schmid and Ganzhorn 1996;Wright 1989, 1999). The sportive lemurs' low metabolic rate and long resting bouts during the dry season may be due in part to physiological thermoregulation (Bethge et al. 2017;Kobbe et al. 2014;Kurland and Pearson 1986;Schülke and Ostner 2007;Sparrow and Newell 1998). However, more research is required to fully understand the effects of temperature fluctuations on sportive lemur physiology (see Bethge et al. 2017).

The sexes did not differ in the time they allocated to resting and traveling behaviors across the year. Overall, males groomed less than females but engaged more in activities of the category "Other," owing to their increased vocal activity compared to females (unpubl. data). These results indicate that males and females did not face different energetic demands despite the additional requirements of lactation and gestation in females, in accordance with what has been found for white-footed sportive lemurs (Dröscher et al. 2016), ring-tailed lemurs (Lemur catta), and brown lemurs (Eulemur sp.: Simmen et al. 2010).

Seasonal Effects on Ranging Behavior

We predicted a change in home range size between the warmer wet and colder dry seasons, reflecting decreased activity in the colder period of the year. The home range sizes recorded in this study were in line with those reported previously for this species (Seiler et al. 2015) but did not differ between the seasons.
In addition, home range locations did not change considerably between the seasons, in contrast to those of primates that rely on fruiting trees year-round and shift their ranges to incorporate available fruit (Garber 1993;Peres 1994;Wallace 2006). Home range size, and possibly location, differs between seasons if habitat requirements differ (e.g., Wiktander et al. 2001), which may not be the case for the folivorous Sahamalaza sportive lemur (Seiler et al. 2014).

The lemurs traveled shorter distances on nights with rainfall, which occurred mainly between December and February. Rainfall can affect the insulation properties of fur, reducing its thermal resistance (Webb and King 1984). The Sahamalaza sportive lemurs in this study may have sought to avoid getting wet by remaining in shelter during rain bouts. Variation in path length as a response to rising humidity levels has also been reported in other primate species: in gorillas (Gorilla beringei), shorter path lengths are associated with increases in humidity and rainfall (Ganas and Robbins 2005), whereas Javan slow lorises (Nycticebus javanicus) travel more in a more humid environment (Reinhardt et al. 2016). The sex of the lemur did not affect path length, although the sexes have potentially different reproductive interests, as males may roam to find receptive females (Lane et al. 2010). These findings reflect those for red-tailed sportive lemurs. Distance traveled also did not differ between the subseasons in this study: while individuals decreased the time spent in locomotion, they did not travel shorter distances. Shorter path lengths may reflect either a change in resource distribution (e.g., Hoffman and O'Riain 2011) or an adaptation to conserve energy in a climate with extreme temperatures (hot: e.g., Campos and Fedigan 2009; cold: Ganzhorn et al. 2004;Warren and Crompton 1997). Although heat production is generally fueled by locomotion, traveling can be energetically costly (Terrien et al. 2011), especially for species with low metabolic rates, such as sportive lemurs. However, the energetic costs of locomotor behavior in this genus have not been studied in depth (Dorcas and Crompton 1998;Warren and Crompton 1997). It is possible that colder temperatures have an additional constraining effect on primate movement, as heat may be lost through exposure of less insulated body areas (Paterson 1983). In contrast, white-faced capuchins (Cebus capucinus) reduce path length with increasing temperatures, reducing their travel distances and centering their activities around remaining water sources in hotter periods (Campos and Fedigan 2009). While the sportive lemurs we studied may also have been influenced by variation in resource distribution, the unequal sample sizes and variances in the dataset did not allow for further analysis of path length. More detailed information is needed to determine the variability of environmental resources for this species.

Seasonal Effects on Sleeping Site Choice

Based on previous results showing that the lemurs did not use tree holes after days with rainfall (Seiler et al. 2013), we predicted a difference in sleeping sites between subseasons. As the study subjects were hidden at their sleeping sites on at least a quarter of the days on which GPS points were collected, we did not analyze the number and types of sleeping sites.
A previous study indicated that microhabitats around sleeping sites are important to this species, suggesting that only specific sleeping sites meet its ecological needs (Seiler et al. 2013). The changes in sleeping site locations recorded in the present study may imply differing requirements or priorities (e.g., protection from rain) across the year. The lemurs used sleeping site locations that were spread over a greater area during the late wet season than during the rest of the year. Lemurs may have sought more sheltered places during months with heavy rains, as suggested by Seiler et al. (2013): some of the tree cavities used during the dry season were very exposed and even collected water during the wet season (I. Mandl pers. obs.). This could explain why the study individuals were least visible during sleeping site checks between December and February, the months with the highest rainfall, as they rested in well-covered day sleeping sites. However, this does not sufficiently explain why the study individuals slept in multiple locations spaced more widely apart, rather than staying in one or two well-sheltered sites.

Suitable sleeping sites may represent a limited resource, as suggested for Milne-Edwards' sportive lemurs (Lepilemur edwardsi: Rasoloharijaona et al. 2003) and weasel sportive lemurs (L. mustelinus: Rasoloharijaona et al. 2010). The importance of high-quality sleeping sites that provide sufficient protection from predators and weather has been emphasized (Anderson 1998). Using sleeping sites far apart and changing them often can function in predator avoidance (Hrdy 1980;Smith et al. 2007): small-bodied primates such as tufted capuchins (Sapajus apella), moustached tamarins (Saguinus mystax), and Azara's owl monkeys (Aotus azarae) change sleeping sites frequently and avoid sleeping in the same location on consecutive nights, presumably to prevent predators from anticipating sleeping site locations (di Bitetti et al. 2000;Savagian and Fernandez-Duque 2017;Smith et al. 2007). We did not assess the predation pressure on sportive lemurs by raptors (e.g., Madagascar harrier hawk, Polyboroides radiatus), snakes (e.g., boas, Acrantophis madagascariensis), or carnivores (e.g., fossa, Cryptoprocta ferox) in the study area, but predation pressure was higher during the dry season than during the rest of the year in previous studies conducted in similar habitats (Gursky and Nekaris 2007;Rasmussen 2005;Schnoell and Fichtel 2011).

Our study subjects slept in a limited area of their home range during the late dry season (July and August), using only one or two sleeping sites repeatedly. They often rested in very exposed spots, as reflected in the high visibility recorded in this period. Sahamalaza sportive lemurs were easily found, often resting exposed, during the dry season in a previous study, which showed that this species is highly vigilant and reactive toward calls of predators regardless of sleeping site type (Seiler et al. 2013). Sleeping sites chosen in the dry season may provide increased protection by enabling earlier detection of predators from a more exposed resting place (Gursky and Nekaris 2007). The lemurs we studied may also have chosen dry season sleeping sites based on other factors, such as ambient temperature: colder temperatures induce animals to choose sleeping sites that provide thermoregulatory benefits such as better insulation (Karanewsky and Wright 2015;Radespiel et al. 1998) or sun exposure.
Azara's owl monkeys face similar trade-offs and choose sleeping sites that minimize predation risk while being constrained by thermoregulatory requirements (Savagian and Fernandez-Duque 2017). These primates rest in more exposed places during colder periods, enabling sunbathing behavior, a possible parallel to Sahamalaza sportive lemurs, which remain active during the day, changing position frequently but not leaving their sleeping sites (Ruperti 2007;Seiler et al. 2013). Colder temperatures at night may have induced the lemurs to rest in exposed sites that allow for sunbathing to rewarm faster, as in the marsupial fat-tailed antechinus (Pseudantechinus macdonnellensis: Geiser et al. 2002). Suitable, sun-exposed sleeping sites that also provide protection from predators may be limited in the colder period of the year, which may explain why the lemurs returned to the same one or two locations during the late dry season.

The spread in sleeping site locations may also have been influenced by social drivers: sleeping sites used during the late wet season, which coincides with the premating period, were furthest apart and located in multiple places within the home range, some especially close to the borders. During such periods, the study individuals may have sought to mark their territories by sleeping in, and marking, multiple locations (Day and Elwood 1999;Reichard 1998;Singhal et al. 2007).

In conclusion, Sahamalaza sportive lemurs showed seasonal changes in activity budgets and sleeping site locations, as well as fluctuations in travel distances across the year. The underlying drivers of this seasonally changing behavior remain to be studied in detail. Future studies should also aim to understand variation in sportive lemur ecology, in particular 1) the physiological adaptations of this genus to strong temperature fluctuations, 2) the trade-off between predation pressure and thermoregulatory requirements, and 3) nutrient requirements and feeding behavior across all seasons. Our results reflect findings from studies of other primate taxa: seasonal changes in climate, and the accompanying changes in resource abundance, influence primate behavioral ecology (Hemingway and Bynum 2005). Primates display behavioral flexibility and energy-conserving behavior when faced with environmental stressors such as low food availability or temperature fluctuations. In view of expected long-term changes in global climate, research on the tolerance and degree of flexibility across primates is required to anticipate species' reactions (Beever et al. 2017).
Abiraterone acetate, exemestane or the combination in postmenopausal patients with estrogen receptor-positive metastatic breast cancer

Resistance to nonsteroidal aromatase inhibitors is a major obstacle in the management of estrogen receptor-positive postmenopausal metastatic breast cancer. The addition of abiraterone acetate to exemestane did not improve clinical outcomes compared with exemestane alone in an androgen receptor-enriched population, potentially due to induced serum progesterone acting as a resistance mechanism.

introduction

Approximately 75% of human breast cancers express the estrogen receptor (ER). A substantial reduction in breast cancer mortality has been achieved with endocrine therapy, including nonsteroidal aromatase inhibitors (NSAIs), which inhibit aromatization of androgens and reduce tumor proliferation [1,2]. Acquired resistance after initial treatment frequently occurs, however, and is a major obstacle in the clinical management of this population.

Active androgen receptor (AR) signaling may contribute to metastatic breast cancer (MBC) resistance to NSAIs. The AR is expressed in 50%-70% of all breast cancers and in ∼80%-90% of ER+ breast cancers, indicating potential androgen responsiveness [3,4]. Furthermore, AR overexpression has been demonstrated in the development of tamoxifen resistance in ERα+ breast cancer in vitro and in xenograft models [5]. Other studies have shown that androgens stimulate oncogenic human epidermal growth factor receptor 2 and other signaling pathways by transcriptional upregulation of AR-dependent genes [6]. Given this potential role of AR, novel anti-androgen signaling therapies may offer new strategies for reversing NSAI resistance.

Improvement of survival outcomes by abiraterone acetate, the prodrug of abiraterone, is attributed to its inhibition of persistent adrenal, testicular and intratumoral androgen synthesis via cytochrome P450 C17 (CYP17) in metastatic castration-resistant prostate cancer [7,8]. Since abiraterone-induced inhibition of CYP17 decreases the synthesis of both androgens and estrogens, abiraterone plus an NSAI may more adequately inhibit estrogen synthesis in breast cancer patients than NSAIs alone. Antitumor activity of abiraterone acetate has been observed in AR+ and ER+ breast cancer patients resistant to endocrine therapy in a phase I/II trial of postmenopausal breast cancer patients with two or more prior endocrine therapies. Abiraterone acetate reduced both androgen and estradiol concentrations below the limits of detection following 1 month of treatment [9]. Seven of 32 ER+ patients (22%) had stable disease for ≥24 weeks, with one patient having a confirmed partial response lasting 13.8 months [9]. The objective of this study was to assess the efficacy and safety of abiraterone acetate with or without exemestane (E) versus E alone, testing the hypothesis that combined inhibition of androgen and estradiol biosynthesis may provide clinical benefit to patients with NSAI-resistant ER+ postmenopausal breast cancer with and without AR+ disease.

patient population

Eligible patients included postmenopausal women aged ≥18 years with ER+ MBC sensitive to letrozole or anastrozole before disease progression (stable disease or an objective response for ≥6 months in the metastatic setting, or relapse-free for ≥2 years in the adjuvant setting). Additional inclusion criteria included ≤2 prior regimens in the metastatic setting (≤1 chemotherapy) and an Eastern Cooperative Oncology Group performance status (ECOG PS) of ≤1.
Patients were excluded if they had received prior exemestane, ketoconazole (non-topical, ≥7 days), aminoglutethimide or a CYP17 inhibitor.

study design and treatments

Patients were stratified according to the number of prior therapies in the metastatic setting (0 or 1 versus 2) and the setting of prior NSAI treatment (adjuvant versus metastatic), and randomized (1:1:1) to receive 1000 mg abiraterone acetate plus 5 mg prednisone (AA), AA with 25 mg exemestane (AAE), or 25 mg exemestane alone (E) once daily in continuous 28-day cycles. The primary end point was progression-free survival (PFS). Secondary end points included overall survival (OS); overall response rate (ORR), defined as a complete or partial response confirmed at the next assessment at least 28 days later; duration of response; clinical benefit rate (CBR), defined by the same criteria as ORR but also including patients who had at least 6 months of stable disease; and blood hormone concentrations. Clinical assessments were conducted at prespecified visits and included safety evaluations for all patients who received at least one dose of study drug. Patients continued study treatment until disease progression, unacceptable toxicity or death. Patients assigned to the E arm could be switched to the AA arm at disease progression at the investigator's discretion. Patients were then followed every 3 months until death, loss to follow-up, consent withdrawal or discontinuation of AA for this indication. Patients with no disease progression over 12 treatment cycles were allowed to continue study treatment at the investigator's discretion. A planned interim analysis was conducted when 110 PFS events (progression or death) in total were observed. A data review committee monitored treatment efficacy and safety during the trial. The review boards at all participating institutions approved the study, and all patients gave written informed consent.

A central pathology review was planned and conducted for this analysis at PhenoPath Labs (Seattle, WA, USA). Expression of ER (Thermo Fisher #RM-9101-S; clone SP1), progesterone receptor (PR; Dako #M3569, clone PgR636) and AR (Dako #M3562, clone AR441) was assessed by immunohistochemical staining of formalin-fixed paraffin-embedded tissues (FFPETs) collected from patients at diagnosis. The percentage of positively staining cells, the intensity of staining (weak, moderate and strong) and the presence of positive internal controls were evaluated. A cutoff of ≥10% was used to define ER+ and PR+ disease for randomization. AR positivity was defined as ≥10% nuclear staining. Liquid chromatography coupled with tandem mass spectrometry (Applied Biosystems/MDS Analytical Technologies, Foster City, CA, USA) was used to determine testosterone, estradiol and estrone concentrations. Progesterone concentrations were determined using a competitive binding immunoenzymatic assay (Beckman Coulter Access Progesterone assay, Beckman Coulter, Inc., Brea, CA) and the DxI 800 instrument (Covance Laboratories, Indianapolis, IN).

statistical analysis

Enrollment of ∼300 patients (100 per arm) was planned. A hazard ratio (HR) of 0.65 (median PFS 6.2 and 4.0 months, respectively) was assumed for each pairwise comparison (AA or AAE compared with E) with 80% power and a two-sided alpha of 0.10; ∼150 PFS events for each pairwise comparison were required. Efficacy analyses were conducted on the intent-to-treat population (all randomized patients) or, where noted, on patients having measurable disease at baseline.
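For intuition, the required number of events can be approximated with Schoenfeld's formula for the log-rank test. The sketch below is a back-of-the-envelope check under the stated design parameters, not the protocol's actual calculation (which likely also accounted for the interim analysis and accrual assumptions).

```r
# Approximate events needed for a two-arm log-rank comparison
# (Schoenfeld 1983); a rough check, not the protocol's calculation
schoenfeld_events <- function(hr, alpha = 0.10, power = 0.80,
                              alloc = 0.5) {
  z_a <- qnorm(1 - alpha / 2)   # two-sided significance level
  z_b <- qnorm(power)
  (z_a + z_b)^2 / (alloc * (1 - alloc) * log(hr)^2)
}

# HR = 0.65 (median PFS 6.2 vs. 4.0 months), 80% power, alpha = 0.10
schoenfeld_events(hr = 0.65)  # ~133 events; near the ~150 required
```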
One interim efficacy/futility analysis was implemented at ∼50% of the expected PFS events. Time-to-event distributions (PFS, OS and duration of response) and median values were estimated using the Kaplan-Meier product-limit method. A stratified Cox proportional hazards model was used to estimate the pairwise HRs, and the stratified log-rank test was used to test the treatment effect. ORR and CBR were compared between each treatment pair using the χ² test or Fisher's exact test. No adjustment of the type I error rate was planned for the pairwise comparisons.

At the clinical cutoff, most patients had discontinued treatment due to progressive disease: 72.5%, 80.5% and 75.0% of patients in the E, AA and AAE arms, respectively (Figure 1). Treatment was ongoing at cutoff for 23 (E), 9 (AA) and 15 (AAE) patients. One patient died in the E arm. Treatment duration was comparable among the three arms, with a median exposure of 3.7 months.

efficacy outcomes

The median follow-up time for PFS was 11.4 months. A statistically significant improvement in the primary end point of PFS was not observed with AA versus E (3.7 versus 3.7 months; HR = 1.1; 95% CI 0.82-1.60; P = 0.437) or AAE versus E (4.5 versus 3.7 months; HR = 0.96; 95% CI 0.70-1.32; P = 0.794; Figure 2A). No difference in PFS between the three arms was observed in the AR+ subpopulation (data not shown). A sensitivity analysis was conducted at different cutoffs for AR positivity; no significant differences in PFS by the level of AR positivity were observed (data not shown). PFS HRs observed in the PR− versus PR+ subgroups differed between the AAE and E arms (HR = 0.545; 95% CI 0.295-1.007 versus HR = 1.138; 95% CI 0.781-1.658, respectively; Figure 2B).

No significant differences among treatment arms were noted for the secondary end points. Median OS has not been reached for any treatment arm. ORR was higher in the AAE arm than in the E arm (12.1% versus 6.3%; P = 0.366). The median duration of response for patients with measurable disease at baseline was 6.9 months for AAE versus 6.5 months for E (P = 0.625). The CBR was higher in the AAE arm (22.7%) than in the E arm (12.7%) (P = 0.137). Thirty-one patients crossed over to AA following progression on E; the crossover was discontinued when enrollment to the AA arm was discontinued. No patient had a complete response, partial response or ≥6 months of stable disease after crossing over to AA.

As expected, a significant (P < 0.001) decrease in serum testosterone beginning at cycle 2 day 1 was observed for the AA arms (Figure 3A). Reductions in estrone and estradiol were also observed across all treatment arms (Figure 3B and C). Progesterone serum concentrations were significantly (P < 0.001) increased above the upper limit of normal physiological concentrations for postmenopausal women at cycle 2 day 1 in all patients treated with AA, but not with E alone (Figure 3D).

safety

Treatment-emergent adverse events (TEAEs) occurred in 88 (86.3%), 80 (92.0%) and 93 (89.4%) patients in the E, AA and AAE arms, respectively (Table 2). Twenty-two (25.3%) patients in the AA arm and 34 (32.7%) patients in the AAE arm experienced grade 3/4 TEAEs, versus 23 (22.5%) patients in the E arm. The majority of these TEAEs (≥89% in each arm) were not study drug-related. TEAEs leading to death were reported for 1 (1.0%) and 2 (1.9%) patients in the E and AAE arms, respectively, although none were considered study drug-related (Table 2).
Treatment discontinuation due to TEAEs occurred in 2 (2.0%), 4 (4.6%) and 10 (9.6%) patients in the E, AA and AAE arms, respectively. The only TEAEs leading to treatment discontinuation in more than one patient in any arm were vomiting, aspartate aminotransferase (AST) increase and dyspnea (two patients each, in the AAE arm). AA-associated TEAEs of special interest occurred in 42 (41.2%), 44 (50.6%) and 56 (53.8%) patients in the E, AA and AAE arms, respectively (Table 2). One grade 4 hypokalemia was observed in each of the AA and E arms. Grade 3/4 hepatic function abnormalities (i.e., AST and alanine aminotransferase increases) occurred in 3.9% and 2.9% of patients in the E arm, in 2.3% of patients in the AA arm, and in 4.8% and 2.9% of patients in the AAE arm.

discussion

We present the first randomized study evaluating the efficacy and safety of the androgen biosynthesis inhibitor abiraterone acetate in breast cancer patients. This study did not meet its primary end point of a significant difference in PFS with AA or AAE compared with E alone, but showed modest improvements in secondary end points. As this study aimed to determine whether AR-mediated signaling plays an important role in NSAI resistance in AR+ disease, it was designed to detect a substantial PFS benefit between treatment arms. Despite the limitations of this study, including its small sample size, statistical modeling based on these trial results suggests that a significant improvement in clinical outcomes would likely not be achieved with a larger trial. Therefore, a phase III trial will not be pursued.

Although we demonstrated that abiraterone inhibits testosterone as well as estradiol and estrone synthesis (Figure 3), the lack of superior efficacy of AAE may imply that AR signaling is not an important driver of breast cancer growth following NSAI therapy. At present, it has not been demonstrated that AR signaling drives de novo or acquired resistance to endocrine therapy [6], although some preclinical studies suggest that this is true [5,10].

The AA-induced increases in serum progesterone concentrations observed in all AA-treated patients may have attenuated any antitumor activity due to androgen biosynthesis inhibition in this study. AA-mediated inhibition of androgen biosynthesis may lead to a subsequent diversion into the progesterone synthetic pathway via adrenal dehydroepiandrosterone and pregnenolone [11]. AA-induced elevated progesterone concentrations could then provide a growth stimulus through the PR or through other mechanisms [12-14]. In patients with ER+ breast cancer, the causes of NSAI resistance are probably multifactorial, including differences in germline pharmacogenomics and somatic changes in tumor biology. In such cases, a simple reduction in estrogen and androgen concentrations may be insufficient to overcome acquired resistance due to these changes.

Although the safety analysis is limited by the small number of patients in each treatment arm, the frequency and clinical pattern of AEs, including those of special interest (e.g., hypertension, fluid retention; Table 2), are consistent with the safety profile of AA in patients with metastatic castration-resistant prostate cancer [7,8].
In conclusion, we have shown that although abiraterone inhibited androgen biosynthesis in postmenopausal patients with ER+, AR+ NSAI-resistant MBC, the elevated progesterone concentrations induced by AA and/or the heterogeneous mechanisms of resistance to NSAIs in this patient population may explain the lack of superiority of the combination of AA and E over E alone. It is possible that AAE is more effective in patients with ER+, PR-negative disease; this hypothesis would need prospective evaluation. These trial results do not exclude the possibility of clinical benefit from AR-signaling disruption in NSAI-resistant ER+ postmenopausal breast cancer, as AA demonstrated activity similar to that of E alone. Further studies of survival outcomes by AR status are ongoing. Additional biomarker analyses are being conducted to potentially identify patient subgroups that manifest AA sensitivity or resistance and to evaluate
The gut microbiome influences host diet selection behavior

Significance

The behavior of diet selection or diet choice can have wide-reaching implications, scaling from individual animals to ecological and evolutionary processes. Previous work in this area has largely ignored the potential for intestinal microbiota to modulate host foraging decisions. The notion that the gut microbiome may influence host foraging behavior has been highly speculated for years but has not yet been explicitly tested. Here, we show that germ-free mice colonized by differential microbiomes from wild rodents with varying natural feeding strategies exhibited significant differences in their voluntary dietary selection. Specifically, colonized mice differed in voluntary carbohydrate selection, and divergent feeding preferences were associated with differences in circulating essential amino acids, bacterial tryptophan metabolism, and intestinal morphology. Together, these results demonstrate a role for the microbiome in host nutritional physiology and foraging behavior.

Diet selection is a fundamental aspect of animal behavior with numerous ecological and evolutionary implications. While the underlying mechanisms are complex, the availability of essential dietary nutrients can strongly influence diet selection behavior. The gut microbiome has been shown to metabolize many of these same nutrients, leading to the untested hypothesis that intestinal microbiota may influence diet selection. Here, we show that germ-free mice colonized by gut microbiota from three rodent species with distinct foraging strategies differentially selected diets that varied in macronutrient composition. Specifically, we found that herbivore-conventionalized mice voluntarily selected a higher protein:carbohydrate (P:C) ratio diet, while omnivore- and carnivore-conventionalized mice selected a lower P:C ratio diet. In support of the long-standing hypothesis that tryptophan, the essential amino acid precursor of serotonin, serves as a peripheral signal regulating diet selection, bacterial genes involved in tryptophan metabolism and plasma tryptophan availability prior to the selection trial were significantly correlated with subsequent voluntary carbohydrate intake. Finally, herbivore-conventionalized mice exhibited larger intestinal compartments associated with microbial fermentation, broadly reflecting the intestinal morphology of their donor species. Together, these results demonstrate that the gut microbiome can influence host diet selection behavior, perhaps by mediating the availability of essential amino acids, thereby revealing a mechanism by which the gut microbiota can influence host foraging behavior.

gut microbiome | animal behavior | diet choice

Proper nutrition is essential to life, and thus animals have evolved complex internal sensory systems that help maintain nutritional homeostasis by regulating macronutrient intake (1). The intestinal tract plays a critical role in this process by liberating dietary nutrients (e.g., essential amino acids [EAAs]) that communicate meal quality to the central nervous system by direct stimulation of enteric nerves or through postabsorptive peripheral signals (2-4). The intestinal tract also harbors trillions of microorganisms (collectively known as the gut microbiome), which have been shown to influence numerous aspects of host behavior, most likely through metabolites that interact with host sensory systems (5).
Given the importance of dietary nutrients in the regulation of food intake and diet selection (6), the gut microbiome may influence host foraging behavior through metabolic processes that affect the availability of nutrients (or their derivatives) recognized by the central nervous system (2,7,8). For example, a recent study showed that experimental colonization of Providencia bacteria in the gut of the model organism Caenorhabditis elegans resulted in divergent foraging preferences through the bacterial synthesis of the neurotransmitter tyramine from the EAA tyrosine (9). While studies in model systems provide powerful opportunities to dissect host-microbe interactions (10), the microbiome field recognizes the need to address and study the complexity of these interactions in ecologically realistic scenarios in which animals can harbor thousands of microbial taxa (11,12). It has been suggested that these complex microbial communities could elicit host foraging behaviors that enrich the intestinal environment in nutrients on which they depend (i.e., promoting their own fitness) (7), while others have posited that a positive-feedback relationship between dietary nutrients and microbial community composition eventually results in stable microbial communities and host foraging behaviors (8). However, these potential mechanisms operate under the assumption that the gut microbiome influences diet selection behavior, a hypothesis that has existed for years (7,8) but has never been tested using complex microbial communities or within an ecological or evolutionary context.

The transplantation of intestinal microbiota into germ-free mice is a powerful approach for disentangling the effects of the gut microbiome on host phenotypes from other potentially confounding factors (e.g., host genetics) (13). This approach has been successfully applied using a wide range of donor species (e.g., termites, zebrafish) (14), demonstrating that germ-free mice are a tractable model system for understanding the function of gut microbiota in evolutionarily distant organisms. In one notable example, Sommer et al. (15) used fecal microbiome transplants from brown bears into germ-free mice (two species separated by ∼94 million years of evolution) to show that seasonal changes in gut microbiota influence host energy metabolism. In our study, we used this approach to determine whether the gut microbiome influences diet selection behavior.
We chose three rodent species with distinct foraging strategies as microbial donors for germ-free mice: a carnivore/insectivore (southern grasshopper mouse, Onychomys torridus), an omnivore (white-footed mouse, Peromyscus leucopus), and an herbivore (montane vole, Microtus montanus). These three species are in the same taxonomic family (Cricetidae) and are all equally distantly related to laboratory mice (∼27 Mya; Mus musculus, family Muridae) (16). Under sterile laboratory conditions, we randomly divided 30 adult male germ-free mice into carnivore-conventionalized (Carn-CONV), omnivore-conventionalized (Omni-CONV), and herbivore-conventionalized (Herb-CONV) treatment groups (n = 10 mice per group), where each mouse in a given group was inoculated with the cecal contents of a unique, wild-caught donor individual (to better reflect natural interindividual variation) (Fig. 1A). One recipient mouse from the Herb-CONV group was excluded from our dataset due to aberrant behaviors that indicated possible injury during the microbiome transplants. Conventionalized mice were acclimated to their microbiota for 7 d, during which they were offered only sterile water and a low protein:carbohydrate (LPC) ratio diet (SI Appendix, Table S1). There were no differences in daily or cumulative macronutrient and food intake across treatment groups during the acclimation period (SI Appendix, Fig. S1 and Dataset S1). After acclimation, conventionalized mice were given a choice between the LPC diet and one with a higher protein:carbohydrate (HPC) ratio (SI Appendix, Table S1) for a period of 11 d (Fig. 1A). Importantly, these diets had identical energy densities (caloric content per gram).

To determine whether treatment groups differed in foraging behavior, we employed a state-space approach known as the Geometric Framework, in which foraging decisions are analyzed within a multidimensional nutritional space where each functionally relevant nutrient forms a single dimension (17,18). In this study, we defined these nutritionally explicit dimensions as protein and carbohydrate intake, thereby allowing us to measure the effect of the gut microbiome on host diet selection. Supporting the hypothesis that the gut microbiome influences diet selection behavior, this approach revealed statistically significant differences in macronutrient intake across groups of conventionalized mice (Fig. 1B). Treatment groups differed significantly in daily (SI Appendix, Fig. S1) and cumulative carbohydrate intake (Fig. 1B) during the diet selection trial. Specifically, Herb-CONV mice voluntarily consumed fewer carbohydrates than Carn-CONV and Omni-CONV mice. This trend was most apparent after ∼1 wk of diet choice (SI Appendix, Fig. S1), suggesting that it may take time for internal nutritional signals to stabilize (19) and for associative learning (20) to affect host feeding behavior. In contrast, treatment groups did not differ in either daily (SI Appendix, Fig. S1) or cumulative protein intake (Fig. 1B). Lower cumulative carbohydrate intake among Herb-CONV mice led to their selection of a significantly higher P:C-ratio diet compared to Omni-CONV and Carn-CONV mice (SI Appendix, Fig. S2).
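To make the Geometric Framework bookkeeping concrete, the sketch below converts hypothetical daily intakes of the two diets into a cumulative protein-by-carbohydrate trajectory for one mouse. The macronutrient fractions are placeholders; the actual LPC/HPC compositions are given in SI Appendix, Table S1.

```r
# Assumed macronutrient fractions (g nutrient per g diet); the real
# values are in SI Appendix, Table S1 and will differ
lpc <- c(protein = 0.05, carb = 0.70)
hpc <- c(protein = 0.30, carb = 0.45)

# Hypothetical daily intake (g) of each diet for one mouse over the
# 11-d selection trial
set.seed(11)
g_lpc <- runif(11, 1.0, 2.5)
g_hpc <- runif(11, 0.5, 2.0)

# Cumulative intake traces a trajectory in nutritional space
protein <- cumsum(g_lpc * lpc["protein"] + g_hpc * hpc["protein"])
carb    <- cumsum(g_lpc * lpc["carb"]    + g_hpc * hpc["carb"])

plot(carb, protein, type = "b",
     xlab = "Cumulative carbohydrate intake (g)",
     ylab = "Cumulative protein intake (g)")

# Group differences can then be tested on trial endpoints, e.g.,
# cumulative carbohydrate intake per mouse across treatment groups
```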
Interestingly, we also observed a significant difference in total food intake among Herb-CONV mice compared to the other treatment groups (SI Appendix, Fig. S1), suggesting that the Herb-CONV mice's preference for the HPC-ratio diet may have permitted them to reduce total energy intake without affecting nutritional homeostasis (i.e., protein leveraging) (19). Under natural scenarios, such differences in selected P:C ratios could be accomplished by animals incorporating different levels of insects, seeds, or foliage into their diets. The ratio of macronutrients an animal consumes, rather than the total amount of any individual nutrient, has significant effects on animal physiology, life history, and reproductive fitness (21-23). The preference of Herb-CONV mice for the HPC diet is also consistent with previous studies showing that Microtus voles prefer high-protein foods when available (24,25), though a follow-up study on the foraging preferences of M. montanus with respect to specific dietary nutrients would more robustly support the ecological significance of our findings. More generally, these results are also consistent with the "nitrogen limitation hypothesis," which posits that the relative scarcity of nitrogen in plant materials may drive the opportunistic consumption of higher-protein foods among herbivores (26-28). Interestingly, the hindgut microbiota of herbivorous mammals are also nitrogen limited (29), so our findings offer support to the hypothesis that microbes may alter host foraging behaviors to enrich the intestinal environment in necessary nutrients (7).

Next, we characterized day 0 (7 d after inoculation and just prior to the diet selection trial) gut microbial community structure, microbiome function, and plasma metabolites of conventionalized mice to determine how these aspects were associated with differential diet selection across treatment groups. The 16S ribosomal RNA (rRNA) inventories confirmed that both donors and recipients harbored distinct bacterial communities that differed significantly from blank extraction controls (Fig. 1C and SI Appendix, Figs. S3 and S4). We observed significant differences in colonization efficiency across treatment groups. Specifically, microbial communities of Carn-CONV and Omni-CONV recipients were significantly similar to those of their own donors, while Herb-CONV recipients were not significantly similar to any donor group (SI Appendix, Fig. S4). It is expected that recipient communities would not match donors identically, as the Mus host physiology reshapes donor communities (30); in addition, our donor communities were collected from individuals in the wild, and thus our design does not account for the well-documented effects of captivity on the microbiome (31). The comparatively lower colonization efficiency among Herb-CONV mice may have been driven by the experimental diets' low content of indigestible plant fibers, which are primarily fermented by microbes. Even in established microbiomes, differences in the content or composition of dietary fiber can result in the extirpation of some fermentative microbes (32,33). However, Herb-CONV mice were successfully colonized by donor microbiota in the phylum Firmicutes (classes Bacilli and Clostridia), notably those in the family Lachnospiraceae, which are strict anaerobes known for their ability to transform plant fibers into volatile fatty acids in the mammalian digestive tract (34). Additionally, microbiomes from herbivorous mammals colonize germ-free mice at a lower absolute density than microbiomes from omnivorous or carnivorous mammals (35).
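Colonization efficiency of the kind described above is often summarized as community similarity between each recipient and its donor. The sketch below uses the vegan package on an invented ASV count table; the sample names and values are placeholders, not the study's sequencing data.

```r
library(vegan)

# Invented ASV count table: rows are samples (three donors and their
# matched recipients), columns are ASVs
set.seed(3)
counts <- matrix(rpois(6 * 20, lambda = 5), nrow = 6,
                 dimnames = list(c("donor1", "donor2", "donor3",
                                   "recip1", "recip2", "recip3"), NULL))

# Convert to relative abundances, then compute Bray-Curtis
# dissimilarities between all samples
rel <- decostand(counts, method = "total")
bc  <- as.matrix(vegdist(rel, method = "bray"))

# Donor-recipient similarity (1 - dissimilarity) for matched pairs
1 - diag(bc[c("recip1", "recip2", "recip3"),
            c("donor1", "donor2", "donor3")])
```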
More work is required to understand the differential transfer of microbiomes across species, and we discuss this limitation in more detail below. Bacterial amplicon sequence variant (ASV) richness and phylogenetic diversity were similar across donor groups but significantly lower in Herb-CONV mice compared to the other treatment groups (SI Appendix, Fig. S4). In general, the bacterial communities of conventionalized mice were dominated by the phyla Bacteroidetes and Firmicutes (SI Appendix, Fig. S4). Importantly, all recipient fecal samples tested negative for the presence of pathogenic microorganisms (see Methods). Metagenomic analysis of recipient fecal samples revealed a statistically significant effect of donor species on the relative abundances of 183 (51%) Kyoto Encyclopedia of Genes and Genomes (KEGG) functional modules (Fig. 1D and Dataset S2). These differences in microbiome community structure and function were accompanied by concomitant differences in plasma metabolites (Fig. 1E), with 27 identified metabolites (16%) differing significantly across treatment groups (Dataset S3). Together, these results demonstrate that interspecific differences in gut microbial communities across rodents with divergent foraging strategies translate to distinct microbial functions and metabolite profiles independent of host diet.

There is substantial evidence that the availability of circulating EAAs provides peripheral signals that act to regulate macronutrient intake and diet selection (4,6). Despite consuming identical diets prior to the selection trial, treatment groups differed in circulating levels of several amino acids, with Herb-CONV mice exhibiting significantly higher amounts of the EAAs lysine, isoleucine, methionine, phenylalanine, and tryptophan (Fig. 2A). While EAAs are primarily derived from the diet, bacteria can also produce these amino acids through their own metabolic processes (36), and thus the gut microbiome may act as a source of EAAs for its host. In support of this hypothesis, treatment groups exhibited broad differences in the microbial synthesis and degradation of EAAs (Fig. 2B). Notably, the microbiome of Herb-CONV mice had a higher abundance of genes involved in the synthesis of aromatic amino acids (phenylalanine, tryptophan, and tyrosine) (Fig. 2B), all of which are synthesized from chorismate (the product of the shikimate pathway) (37). The ratios of bacterial genes involved in tryptophan biosynthesis (M00023) to those involved in tryptophan degradation via the kynurenine pathway (M00038) were significantly correlated with plasma tryptophan (Fig. 2C). Given that conventionalized mice consumed identical diets prior to blood collections, these results demonstrate that bacterial metabolism can alter the availability of circulating plasma EAAs, consistent with recent studies conducted in Drosophila (38).

There is emerging evidence that bacterial tryptophan metabolism is a key mechanism by which the gut microbiome can influence host behavior (39,40). This relationship is a consequence of tryptophan's role as the primary regulatory molecule for the synthesis of central serotonin (5-hydroxytryptamine; 5-HT) (41), which has been shown to drive foraging behavior and diet selection in several experimental studies (42,43). For example, when given a choice between low- or high-carbohydrate meals, rats receiving hypothalamic injections of 5-HT significantly reduced their carbohydrate intake (44).
Importantly, serotonin synthesis is extraordinarily sensitive to plasma tryptophan availability, and thus plasma tryptophan is generally considered a reliable proxy for central serotonin (45). Therefore, we predicted that plasma tryptophan would be associated with differences in diet selection among conventionalized mice. Indeed, we found a statistically significant correlation between day 0 plasma tryptophan and subsequent voluntary carbohydrate intake (Fig. 2C). More recent work has argued that serotonin synthesis is affected by the availability of tryptophan relative to the large neutral amino acids (LNAAs; Leu, Ile, Phe, Tyr, and Val) that compete for transport across the blood-brain barrier (46). Consistent with these studies, we found a statistically significant correlation between day 0 Trp:LNAA ratios and cumulative carbohydrate intake (Fig. 2C). Further, the ratios of tryptophan biosynthesis and degradation KEGG modules were also statistically significant predictors of carbohydrate and P:C intake (Fig. 2C). Overall, these results support the hypothesis that bacterial tryptophan metabolism influences host diet selection behavior. Interspecific differences in foraging behavior are generally associated with diet-specific adaptations to intestinal physiology. For example, herbivores generally maintain an enlarged cecum (fermentation chamber) that enhances the digestibility of low-quality, carbohydrate-rich foods (47). Given that the gut microbiome can profoundly alter host intestinal gene expression and physiology (48-50), divergent microbial communities may drive differences in intestinal morphology across feeding strategies. At the conclusion of the diet selection trial (day 11), we quantified intestinal morphology with the prediction that conventionalized mice would exhibit differences that broadly reflected those of their donor species.

Fig. 2. Day 0 plasma tryptophan availability and bacterial tryptophan metabolism are associated with differential macronutrient intake across treatment groups. (A) Heatmap illustrating broad differences in plasma levels of EAAs across treatment groups, with Herb-CONV mice exhibiting significantly greater levels of lysine (χ2 = 6.13, P = 0.047), isoleucine (χ2 = 11.42, P = 0.003), methionine (χ2 = 6.13, P = 0.047), phenylalanine (χ2 = 6.13, P = 0.047), and tryptophan (χ2 = 9.10, P = 0.011) compared with Carn-CONV and Omni-CONV mice. Columns represent individual conventionalized mice for each treatment group. * denotes P ≤ 0.05, and color indicates the treatment group with the greatest circulating plasma levels (red = Carn-CONV, blue = Omni-CONV, and yellow = Herb-CONV). (B) Heatmap illustrating broad differences in the abundances of microbial genes associated with metabolism of EAAs (Dataset S2). * denotes P ≤ 0.05, and color indicates the treatment group with the greatest relative abundance. (C) Correlation plot summarizing relationships between plasma tryptophan availability (Plasma Trp, Plasma Trp:Large Neutral Amino Acids), bacterial tryptophan metabolism (Trp Synthesis, Trp Degradation, Trp Synthesis:Degradation), and host diet selection (Carbohydrate Intake, Protein Intake, P:C Intake) among conventionalized mice. The direction and color of the ellipses indicate whether correlations were positive or negative, and asterisks indicate whether Spearman's correlations were statistically significant (* denotes P ≤ 0.05, ** denotes P < 0.01, and *** denotes P < 0.001).
While there was no change in body mass over the duration of the experiment (F = 1.01, P = 0.377), treatment groups differed significantly in empty colon mass (Fig. 3B), with Herb-CONV mice exhibiting comparatively larger colons than those in other treatment groups. There were no significant differences in cecum mass (Fig. 3A) or colon length (Fig. 3C). In general, the comparatively larger colons observed in Herb-CONV mice are consistent with evolutionary adaptations observed in herbivorous animals, which generally maintain larger hindguts to promote digestion (47). The gut is a highly dynamic organ that can rapidly change in mass and length in response to environmental conditions, often through altered rates of cellular proliferation in intestinal crypts and cell loss through sloughing or apoptosis at the ends of intestinal villi, but also through changes in the size of individual enterocytes (51). In the future, histological analyses could be conducted to investigate whether these changes in gut size are driven by hyperplasia (increase in cell number) and/or hypertrophy (increase in cell size) and to rule out the possibility that these differences are driven by intestinal inflammation. While the observed differences in gut size are consistent with adaptations observed in herbivores, our study only tested the microbiome of a single species from each feeding strategy. A more robust test of whether the microbiome recapitulates the differences in gut size observed across feeding strategies would require several donor species from each dietary strategy. Another question is whether the gut microbiome affected intestinal morphology directly or via differential diet selection. While our experimental design makes it difficult to disentangle the effects of differential diet selection from those of the microbiome, it is worth noting that previous work has demonstrated that laboratory mice fed LPC-ratio diets had larger intestinal compartments (e.g., colon) compared to those fed higher P:C diets (50). In our study, we observed the opposite: Herb-CONV mice, which consumed an HPC-ratio diet (Fig. 1B), exhibited larger colon masses (Fig. 3). These results contradict the generally accepted model of adaptive physiological responses to dietary carbohydrates, suggesting that the gut microbiome may drive interspecific differences in host intestinal physiology to some extent, independent of the effects of diet and genetics. Here, we present evidence for an effect of the gut microbiome on host diet selection behavior; however, it is important to recognize that our approach has several substantial limitations. For example, the relative differences in nutrient composition between diets have been shown to greatly influence animals' ability to distinguish between diets and feed differentially (19), suggesting that our differential diet selection results may have been more pronounced if we had used diets with greater differences in macronutrient content. Further, previous work has shown that the evolutionary distance between donor species and germ-free mice can affect the efficacy of microbiome transplants (14). While our selected donor species were similarly distant to M. musculus, there were significant differences in colonization success across donor species, suggesting that cecal microbiota may be specifically adapted to their hosts.
While differences in colonization efficiency may limit our ability to robustly connect our study to the ecology of donor species, this limitation should not diminish our major finding that conventionalized germ-free mice harboring compositionally and functionally distinct microbiotas differing in microbial diversity exhibited different feeding preferences. Overall, our approach is stronger than comparing conventional mice with the highly artificial state of germ-free mice, and the complex microbial communities that we used better reflect reality, which is recognized as a pressing need in the field of host-microbe interactions (11, 12). In this study, we found that conventionalized germ-free mice harboring distinct gut microbiota exhibited significant differences in diet selection behavior, providing support for our core hypothesis that microbiota can influence foraging decisions. Specifically, our study provides evidence that variation in the gut microbiota alters host nutrient availability and can yield significant differences in the diet selection of conventionalized mice in just 11 d, likely through differential bacterial metabolism and downstream availability of EAAs, especially tryptophan. These findings are largely consistent with recent mechanistic work in model systems (9, 38) but address the natural variation in microbial communities that exists among individuals and across species (11, 12). Therefore, this study not only represents a contribution to a large body of work showing that the gut microbiome is a key player in host physiology and performance (52) but also more broadly supports the hypothesis that the gut microbiota can influence ecological and evolutionary processes shaping animal behavior. Foraging strategies and feeding behaviors can influence many aspects of an animal's ecology [e.g., the need to obtain specific nutrients while also avoiding predators (53)], and animal feeding can also shape the structures of entire plant and animal communities (54). Thus, there may be an underexplored role for gut microbes in influencing far-reaching aspects of animal and ecosystem ecology through influencing the feeding behavior of their hosts.

Materials and Methods

Cecum contents for microbiome transplants were transferred to 1.7-mL Eppendorf tubes using sterile instruments and temporarily frozen at −20 °C in the field before long-term laboratory storage at −80 °C.

Microbiome Transplants. Donor cecum contents were diluted at 100 mg/mL in sterile phosphate-buffered saline containing 0.2 g/L Na2S and 0.5 g/L cysteine as reducing agents (55, 56). Under sterile laboratory conditions, 30 adult (aged 6 to 8 wk) male germ-free C57BL/6 mice (Taconic Biosciences, Inc.) were randomly divided into Carn-CONV, Omni-CONV, and Herb-CONV groups (n = 10 mice per group), where each mouse in a given group was colonized by oral gavage of 200 μL of fecal slurry from a unique, wild-caught donor individual. Conventionalized mice were then singly housed in sterile static cages (Innovive, Inc., MSX2-AD) modified by the addition of two feeder hoods (Laboratory Products, Inc., 2110S) that prevent mice from caching powdered diets, thus enabling the tracking of daily macronutrient intake (see below). Due to a lack of similar studies on this topic, we were unable to conduct an a priori power analysis to justify the number of donor/recipient mice per group.
Instead, we decided on n = 10 per group based on the number of animals typically used in studies involving germ-free mice, the vast majority of which used 5 to 10 individuals per group (13). One recipient mouse from the Herb-CONV group (V57) was excluded from our dataset due to aberrant behaviors that indicated possible injury during microbiome transplants. All recipient fecal samples were screened for 21 of the most common rodent pathogenic microorganisms using PCR tests conducted by a third-party diagnostic company (Charles River Research Animal Diagnostic Services, Wilmington, MA). Diet Selection Experiment. After colonization, conventionalized mice were acclimated for 7 d [to allow the gut microbiome to stabilize (55)], during which they were offered only sterile water and an LPC-ratio diet (0.27; SI Appendix, Table S1), as this diet is similar in composition to standard mouse chow. After acclimation (day 0), mice were briefly removed from their cages for a 200-μL blood draw for metabolomics analysis (see details below). Mice were weighed (rounded to the nearest hundredth) and returned to empty cages to facilitate the collection of fresh fecal samples for 16S rRNA microbial inventories and shotgun metagenomics (see details below). Conventionalized mice were then presented with a choice between two isocaloric diets (SI Appendix, Table S1): 1) the LPC (0.27) diet offered during acclimation and 2) a diet with an HPC ratio (0.71). The positions of these two diets were rotated daily to avoid learned preferences. Diets were designed by Teklad/Envigo and were powdered prior to sterilization to be visually indistinguishable from each other and to prevent food caching. Daily food consumption was calculated as the difference between the mass (rounded to the nearest thousandth) of each diet presented (∼8 g) and the mass of each diet remaining after a 24-h period. After diet preferences were tracked for 11 consecutive days, animals were euthanized and dissected to investigate differences in the empty masses (rounded to the nearest thousandth) of intestinal compartments. Conventionalized mice were maintained on a 12:12-h light:dark cycle, with 21 °C ambient temperature and 40% humidity for the duration of the experiment. Animal experiments were conducted at the University of Pittsburgh Plum Borough Primate Facility under IACUC protocol 19074445. Metabolomics. Blood plasma was analyzed for primary metabolites (amino acids, hydroxyl acids, carbohydrates, sugar acids, sterols, aromatics, nucleosides, amines, and miscellaneous compounds) by the West Coast Metabolomics Center at the University of California, Davis, which performed all sample preparation, data acquisition, and data processing as previously described (57). Briefly, metabolites were extracted using a mixture of acetonitrile:isopropanol:water (3:3:2, vol/vol/vol) as well as 1:1 acetonitrile:water for removal of protein from serum. Dried metabolite extracts were resuspended in methoxyamine hydrochloride in pyridine for derivatization before being analyzed by gas chromatography-time-of-flight (GC-TOF) mass spectrometry on a LECO Pegasus IV mass spectrometer equipped with automated liner exchange (ALEX; Gerstel Corporation) and a cold injection system (CIS; Gerstel Corporation) for data acquisition. The CIS temperature was ramped from 50 °C to a final temperature of 250 °C at a rate of 12 °C s−1. Raw GC-TOF mass spectrometry data were preprocessed with ChromaTOF (version 2.32), and apex masses were used to identify metabolites using the BinBase database.
Values were reported as peak height for the quantification ion (m/z value) at the specific retention index, which is more precise than peak area for low-abundance metabolites. All database entries that were positively detected in more than 10% of the samples of a study design class for unidentified metabolites were reported. Raw peak heights were vector normalized to reduce the impact of between-series drifts of instrument sensitivity caused by machine maintenance status and tuning parameters. DNA Extractions. DNA was extracted from donor cecal contents and day 0 conventionalized mouse feces using the Qiagen PowerFecal DNA Kit (Qiagen, 12830) following the manufacturer's instructions. 16S rRNA Microbial Inventories. Extracted DNA from conventionalized mice and donor cecum contents was amplified and sequenced by the Genome Research Core of the University of Illinois at Chicago as previously described (58). Briefly, PCR was used to amplify a portion of the bacterial 16S rRNA gene for Illumina sequencing using the Earth Microbiome Project primers 515F (GTGCCAGCMGCCGCGGTAA) and 806R (GGACTACNVGGGTWTCTAAT) targeting the V4 region of the microbial small subunit ribosomal RNA gene (59). Amplicon libraries were sequenced using a 2 × 251 paired-end run on an Illumina MiSeq. In addition to donor and recipient fecal samples, we sequenced five "blank" extractions to control for the possibility of microbial contamination during the extraction procedure and microbial DNA present in commercial extraction kits (60). A total of 1,398,994 raw Illumina sequencing reads [mean of 22,206 per sample (n = 63) ± 1,111 SE] were paired and quality filtered via the DADA2 pipeline (61) in QIIME2 (version 2020.4) (62) using default parameters. Sequences that passed the quality filter were clustered into ASVs, which were identified using the SILVA reference database (release 138) (63). Identified ASVs were filtered to exclude nonbacterial sequences (archaea, chloroplast, eukaryote, and mitochondria), reducing our total number of reads to 1,396,450 (mean of 22,166 per sample ± 1,112 SE) and 4,359 ASVs. We detected a total of 4,118 ASVs in donor and recipient fecal samples, 19 (0.46%) of which were also detected in blank extractions (total of 260 ASVs from 27,807 reads with mean of 5,561 per sample ± 1,419 SE). As recommended by McMurdie and Holmes (64), we used unrarefied ASV tables for comparisons of colonization efficiency (Bray-Curtis distances), alpha diversity (ASV richness and Faith's phylogenetic diversity), and beta diversity [Bray-Curtis and unweighted/weighted UniFrac distances (65)]. Shotgun Metagenomics. Extracted DNA from conventionalized mice was sent to CoreBiome, Inc. (St. Paul, MN) for shotgun metagenomic analysis using BoosterShot. Briefly, sequencing libraries were prepared using a procedure adapted from the Illumina Nextera Library Prep Kit (Illumina, 20018705) and sequenced on an Illumina NovaSeq using single-end 1 × 100 reads with the Illumina NovaSeq SP reagent kit (Illumina, 20027464). A total of 122,190,150 raw sequence reads [mean of 4,213,453 per sample (n = 29) ± 151,158 SE] were filtered for low quality (Q-score < 30) and short length (< 50 bp), trimmed of adapter sequences, and converted into a single fasta using SHI7 (version 0.99) (66).
Sequences were then trimmed to a maximum length of 100 bp and aligned using BURST (version 0.99.8) (67) at 97% identity against CoreBiome's Venti database, consisting of all RefSeq bacterial genomes with additional manually curated strains, as well as a bacterial KEGG (68) annotated database created by dereplicating the bacterial genes within the Venti database. KEGG orthology counts were converted to relative abundance within a sample and collapsed into KEGG modules for statistical analysis. Statistics. Differences in macronutrient and total diet intake across treatment groups were tested using a multivariate analysis of variance (MANOVA) while controlling for the effects of body mass. A post hoc power analysis for MANOVA was conducted using G*Power (69) (version 3.1) to confirm that statistical power was sufficiently greater than the widely accepted minimum threshold of 0.80 (70). Microbial community structure (from 16S rRNA inventories) was visualized using principal coordinates analysis (PCoA) on ASV relative abundances, which were then assessed for differences (controlling for multiple comparisons using false discovery rate-corrected P values) across treatment groups using nonparametric permutational multivariate analysis of variance (PERMANOVA), analysis of similarity, and permutational analysis of dispersion in QIIME2 (62). Microbiome function was visualized using PCoA on KEGG module relative abundances and analyzed for differences across treatment groups with PERMANOVA in QIIME2. Differences in the relative abundance of functional KEGG modules across conventionalized mice were tested using the nonparametric Kruskal-Wallis test and linear discriminant analysis in LEfSe using the "one-against-all" strategy for multiclass analysis (71). Identified plasma metabolites were filtered (based on mean intensity and interquartile range) and auto-scaled before nonparametric median tests were used to identify metabolites that varied significantly across treatment groups, which were visualized using supervised partial least squares discriminant analysis (PLS-DA) in MetaboAnalyst (version 4.0) (72). Spearman rank correlations between plasma Trp availability, Trp KEGG modules, and macronutrient intake were conducted while controlling for the effect of donor species in the R package ppcor (version 1.1) (73) and visualized using corrplot (version 0.85) (74). Differences in empty cecum mass, empty colon mass, and colon length across treatment groups were tested using ANOVA with body mass as a covariate and were corrected for multiple comparisons using Tukey's post hoc test. Unless otherwise noted, all statistical tests were two-sided and were conducted in JMP Pro version 14.1.0 (SAS Institute Inc.). For all statistical analyses, P values ≤ 0.05 were defined as significant. Data Availability. Sequencing data have been deposited in the National Center for Biotechnology Information Sequence Read Archive Database and are publicly available at BioProject (https://www.ncbi.nlm.nih.gov/bioproject/PRJNA629007/) (75).
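For readers working in Python rather than R, a rough analogue of the partial Spearman correlation computed with ppcor (rank-transform, residualize against dummy-coded donor species, then correlate the residuals) might look like the sketch below; the data are simulated placeholders, not the study dataset.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def partial_spearman(x, y, covariates):
    """Partial Spearman correlation of x and y controlling for covariates."""
    xr, yr = rankdata(x), rankdata(y)
    Z = np.column_stack([np.ones(len(x)), covariates])
    # residualize the ranks against the covariates via least squares
    rx = xr - Z @ np.linalg.lstsq(Z, xr, rcond=None)[0]
    ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
    return pearsonr(rx, ry)

rng = np.random.default_rng(0)
trp = rng.normal(size=29)                      # simulated plasma Trp
carb_intake = 0.5 * trp + rng.normal(size=29)  # simulated carbohydrate intake
donor = rng.integers(0, 3, size=29)            # 3 donor species
dummies = np.eye(3)[donor][:, :2]              # dummy-code, drop one level

r, p = partial_spearman(trp, carb_intake, dummies)
print(f"partial Spearman r = {r:.2f}, P = {p:.3f}")
```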
2020-07-07T13:17:40.554Z
2020-07-02T00:00:00.000
{ "year": 2022, "sha1": "25ad999b53138996288921737d0f6aaab748d381", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "80cec17b2bf429834e837074fa523f80ea0bcef4", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
233296997
pes2o/s2orc
v3-fos-license
Computation of the index of some meromorphic functions of degree 3 on tori

The index of a meromorphic function $g$ on a compact Riemann surface is an invariant of $g$, which is defined as the number of negative eigenvalues of the differential operator $L:=-{\Delta}-|dG|^2$, where ${\Delta}$ is the Laplacian with respect to a conformal metric $ds^2$ on the Riemann surface and $G \colon M \to S^2$ is the holomorphic map corresponding to $g$. We consider the meromorphic function $w$ on the Riemann surface $M_a= \{(z,w) \in\widehat{\mathbb{C}}^2 \mid w^2=z(z-a)(z+\frac{1}{a})\}$ $(a \geqslant 1)$, homeomorphic to a torus, and we determine the index of $tw$ for all $a$ in the range $1 \leqslant a \leqslant a_0$ (where $a_0$ can be numerically evaluated) and all $t>0$.

Introduction

The index of a nonconstant meromorphic function g on a compact Riemann surface is an invariant of g, which is defined as the number of negative eigenvalues of the differential operator L := −∆ − |dG|², where ∆ is the Laplacian with respect to a conformal metric ds² = λ dζ dζ̄ on the Riemann surface, defined by ∆ := (4/λ) ∂²/(∂ζ∂ζ̄) using a local coordinate ζ, G : M → S² is the holomorphic map corresponding to the meromorphic function g, and |dG| is the norm of the differential dG of G. The multiplicity of the eigenvalue 0 of L is called the nullity of g and denoted by Nul(g). The operator L depends on the choice of conformal metric, but the index and the nullity do not. The index of a meromorphic function is closely related to the (Morse) index of a complete minimal surface with finite total curvature. Huber [6] and Osserman [11] proved that if the total curvature of a complete oriented minimal surface in R³ is finite, then this minimal surface is identified with a Riemann surface obtained by removing finitely many points from a compact Riemann surface, and the Gauss map of this minimal surface extends to a meromorphic function on the compact Riemann surface. Fischer-Colbrie [3] and Gulliver-Lawson [4], [5] proved that for a complete oriented minimal surface in R³, the index is finite if and only if the total curvature is finite. This is a qualitative study of the index. Fischer-Colbrie also proved that when the total curvature of a complete oriented minimal surface in R³ is finite, the index coincides with the index of the extended Gauss map of this minimal surface. Tysk [12] proved that the index of a complete oriented minimal surface in R³ is bounded from above by some scalar multiple of the total curvature. This is the first quantitative study of the relationship between the index and the total curvature. Lower bounds for the index were studied by Choe [1] and Nayatani [9]. Nayatani [10] studied how the index and the nullity of the operator L_g associated to a meromorphic function g on a compact Riemann surface M change under a certain deformation g_t of g (t is a positive real number). He considered the derivative ℘′ of the Weierstrass ℘-function corresponding to the square lattice Z ⊕ iZ as the meromorphic function g, and computed the index of g_t when t is sufficiently small and the nullity of g_t for all t. In particular, he showed that there are two values t₁, t₂ (t₁ < t₂) of t for which the nullity is 4. Furthermore, he investigated the change of the index when t becomes large. He showed that the indices of t₁g and t₂g are 5, and since t₂g is the Gauss map of the Costa surface, he could conclude that the index of the Costa surface is 5.
In this paper, we study the index Ind(g) and the nullity Nul(g) of certain nonconstant meromorphic functions g from a compact Riemann surface M to Ĉ = C ∪ {∞}. In order to compute Nul(g), we recall the real vector space H(g) (see (2.2)), which was introduced by Ejiri-Kotani [2] and Montiel-Ros [7]. By the formula Nul(g) = 3 + dim_R H(g), we can compute Nul(g). If the genus of the Riemann surface M is 1, that is, M is homeomorphic to the torus, H(g) can be described as a space of meromorphic functions on M subject to a divisor condition together with a period condition, where ω is a fixed nonzero holomorphic one-form on M, p₁, · · · , p_µ are the ramification points of g with ramification indices e₁, · · · , e_µ, D(f) is the divisor of f, and, when P(g) is the polar divisor of g, B(g) is the divisor defined by B(g) = Σᵢ eᵢpᵢ − 2P(g). Since the definition of H(g) is complicated, as it includes the period condition, we introduce a complex vector space H̃(g) that is easier to handle, obtained by dropping the period condition; H(g) is a real subspace of H̃(g). As already mentioned, Nayatani [10] computed the index of the Costa surface. The compact Riemann surface of the Costa surface is C divided by the square lattice Z ⊕ iZ, which is homeomorphic to a torus, and the Gauss map of the Costa surface is a scalar multiple of the derivative ℘′ of the Weierstrass ℘-function. C/(Z ⊕ iZ) is isomorphic to M₁ as a Riemann surface, and ℘′ coincides with the meromorphic function w up to a scalar multiple. As a generalization of Nayatani's setting, we consider a one-parameter family of Riemann surfaces M_a homeomorphic to a torus together with the meromorphic function w, and we tackle the problem of computing the index and nullity of tw, t > 0. We note that M_a is isomorphic, as a Riemann surface, to C divided by a rectangular lattice Z ⊕ icZ, c > 0. As a result, we are able to determine Ind(tw) for all a in the range 1 ≤ a ≤ a₀ (where a₀ can be numerically evaluated) and all t > 0. This is the main theorem of this paper. First we determine H(w). Then we determine the t > 0 for which the dimension of H(tw) is 1 or more, and find exactly two such values t = t₁(a), t₂(a) (t₁(a) < t₂(a)) for each a in the range 1 ≤ a ≤ a₀ (where a₀ can be numerically evaluated). Therefore, we can determine Nul(tw). By using the fact that Nul(tw) also changes if Ind(tw) changes as a and t move, we determine Ind(tw) for all a in the range 1 ≤ a ≤ a₀ (where a₀ can be numerically evaluated) and all t > 0.

Theorem 1.1 (Theorem 5.2). If t₁(a) and t₂(a) are as described above, then Ind(tw) is determined for any a in the range 1 ≤ a ≤ a₀ (where a₀ can be numerically evaluated) and all t > 0; the precise values are given in Theorem 5.2.

This paper is organized as follows. In Section 2, we define the index and nullity of a meromorphic function on a compact Riemann surface. We also recall the vector spaces which are used in the computation of nullity and were introduced by Ejiri-Kotani [2] and Montiel-Ros [7]. In Section 3 we consider a certain family of meromorphic functions g_a of degree three defined on Riemann surfaces M_a, a ≥ 1, homeomorphic to the torus, and describe the above vector spaces in these special cases. In Section 4 we compute the nullity of tg_a for all t > 0 and a in the range 1 ≤ a ≤ a₀, where a₀ is a constant which can be numerically evaluated. In Section 5 we compute the index of tg_a for all t and a in the same range.

In this section, we define the index and nullity of a meromorphic function on a compact Riemann surface. We also recall the vector spaces which are used in the computation of nullity. Let M be a compact Riemann surface, and let g be a nonconstant meromorphic function from M to Ĉ.
We fix a conformal metric ds² = λ dz dz̄, where λ is a positive function on M, and consider the differential operator L = −∆ − |dG|². Here, ∆ = (4/λ) ∂²/(∂z∂z̄) is the Laplace-Beltrami operator of ds², and G : M → S² is the holomorphic map corresponding to the meromorphic function g, where S² is the unit sphere of R³. |dG|² is the square of the norm of dG with respect to the metric ds². Actually, for two conformal metrics ds² = λ dz dz̄ and d̃s² = λ̃ dz dz̄, we have L̃ = φL with φ = λ/λ̃ > 0. (Note that φ is globally defined on M.) Therefore, Lu = 0 if and only if L̃u = 0. We now define the index and the nullity of g.

Definition 2.1. We define the index Ind(g) of the meromorphic function g as the number of negative eigenvalues (counted with multiplicities) of the operator L, and we define the nullity Nul(g) of g as the multiplicity of the eigenvalue 0 of L.

The nullity Nul(g) does not depend on the choice of conformal metric ds² by Claim 1. The index Ind(g) also does not depend on the choice of ds², by the following discussion. Let Q be the bilinear form associated with L; Q is symmetric. Actually, if we let ds² = λ(dx² + dy²), we obtain (2.1). Since λ does not appear on the rightmost side of (2.1), Q does not depend on the choice of conformal metric ds² on M. Let the eigenspace V_µ corresponding to an eigenvalue µ of L be V_µ = {u ∈ C∞(M) | Lu = µu}, let µ₁ < µ₂ < · · · < µ_k < 0 be the set of all negative eigenvalues of L, and let V = V_{µ₁} ⊕ · · · ⊕ V_{µ_k}. Any u ∈ V can be written as a sum of eigenfunctions with negative eigenvalues; therefore, when u ≠ 0, Q(u, u) < 0. In fact, one can show that V is a maximal subspace of C∞(M) on which Q is negative definite. Thus, the index Ind(g) coincides with the dimension of a maximal subspace of C∞(M) on which Q is negative definite. Hence, by Remark 1, Ind(g) does not depend on the choice of conformal metric ds².

Let ds²_{S²} = 4/(1 + |z|²)² dz dz̄ be the standard Riemannian metric on Ĉ. Let ζ be a local holomorphic coordinate on M. The pull-back ds²_g of ds²_{S²} by g can be written as ds²_g = λ_g dζ dζ̄; let ∆_g = (4/λ_g) ∂²/(∂ζ∂ζ̄) be the Laplacian of ds²_g. If the operator L corresponding to this ds²_g is denoted by L_g, then L_g = −∆_g − 2. Since λ_g = 0 at the ramification points of g, ds²_g degenerates at the ramification points. Although ds²_g is not strictly a conformal metric, one can show that Ind(g) can be computed as the number of negative eigenvalues (counted with multiplicities) of L_g and that Nul(g) can be computed as the multiplicity of the eigenvalue 0 of L_g. In other words, we have the following.

Lemma 2.2. Ind(g) can be computed as the number of eigenvalues (counted with multiplicities) of −∆_g which are smaller than 2. Nul(g) can be computed as the multiplicity of the eigenvalue 2 of −∆_g.

Remark 2. Let M′ be a complete oriented minimal surface in R³ and let ds² be the first fundamental form on M′. The operator L corresponding to ds² becomes the Jacobi operator L = −∆ + 2K, where ∆ is the Laplacian corresponding to ds² and K is the Gaussian curvature of ds²; the (Morse) index of M′ is then defined accordingly. Furthermore, Fischer-Colbrie [3] proved that when the total curvature of M′ is finite, the index of M′ coincides with the index of the extended Gauss map of M′.

Example 1. The index of the catenoid is 1. In fact, the catenoid is identified with C − {0} as a Riemann surface, and its extended Gauss map is the meromorphic function g(z) = z on Ĉ. Therefore, ds²_g = ds²_{S²}, the standard metric of the unit sphere. Thus, Ind(g) coincides with the number of eigenvalues (counted with multiplicities) of −∆_{S²} which are smaller than 2.
Since 0 is the only such eigenvalue and it has multiplicity 1, we conclude that Ind(g) = 1. Thus, the index of the catenoid is 1 by Remark 2.

The following proposition holds: Nul(g) ≥ 3 for any nonconstant meromorphic function g; equivalently, dim N(g) ≥ 3, where N(g) denotes the kernel of L_g.

Proof. We define L(g) as L(g) = {⟨a, G⟩ | a ∈ R³}. One checks that L_g⟨a, G⟩ = 0 for every a ∈ R³; therefore, ⟨a, G⟩ ∈ N(g). Thus, L(g) ⊂ N(g). The dimension of L(g) is three. In fact, if this is not true, we have a linear relation a₁G₁ + a₂G₂ + a₃G₃ = 0, and this means that the image of G lies in a great circle of S². But this implies G is a constant map (and g is a constant function) as it is holomorphic. This contradicts the assumption that g is nonconstant. Therefore, dim L(g) = 3 and dim N(g) ≥ 3. This completes the proof of the proposition.

As mentioned in the above proof, L(g) ⊂ N(g), and Nul(g) > 3 if and only if N(g) \ L(g) ≠ ∅. In order to compute Nul(g), we recall the work of Ejiri-Kotani [2] and Montiel-Ros [7]. They observed that an element of N(g) \ L(g) appears in the following way.

Theorem 2.5 (Ejiri-Kotani [2] and Montiel-Ros [7]). Let g : M → Ĉ be a nonconstant meromorphic function on a compact Riemann surface M, and let G : M → S² be the holomorphic map corresponding to g. Let X : M′ = M \ {p₁, · · · , p_µ} → R³ be a complete branched minimal immersion of finite total curvature whose extended Gauss map is g and whose ends are all planar. Then u = ⟨X, G⟩ : M′ → R, the support function of X, extends smoothly to M and gives an element of N(g) \ L(g).

Conversely, Ejiri-Kotani [2] and Montiel-Ros [7] proved that any element of N(g) \ L(g) appears as the support function of a complete branched minimal surface with planar ends.

Theorem 2.6 (Ejiri-Kotani [2] and Montiel-Ros [7]). For any u ∈ N(g) \ L(g), there exists a complete branched minimal immersion X : M′ = M \ {p₁, · · · , p_µ} → R³ whose ends are all planar and whose extended Gauss map coincides with g, such that u = ⟨X, G⟩ on M′, where G : M → S² is the holomorphic map corresponding to g.

In terms of the Weierstrass representation formula, this assertion is stated as follows. Let H(g) be the real vector space of meromorphic functions on M cut out by a divisor condition together with a period condition, where K(M) is the canonical divisor of M, the pᵢ are the ramification points of g with ramification indices eᵢ, P(g) is the polar divisor of g, and B(g) is the divisor defined by B(g) = Σᵢ eᵢpᵢ − 2P(g). In particular, we have the linear isomorphism N(g)/L(g) ≅ H(g).

Corollary 2.7 (Ejiri-Kotani [2] and Montiel-Ros [7]). Nul(g) can be computed from the dimension of H(g) by the formula Nul(g) = 3 + dim_R H(g).

The complex vector space H̃(g), obtained by dropping the period condition, plays an auxiliary role in the computation of H(g).

Setting

In this section, we consider a certain family of meromorphic functions of degree three defined on Riemann surfaces homeomorphic to the torus, and describe the above vector spaces in these special cases.

Torus

We first define the Riemann surfaces. Let M_a = {(z, w) ∈ Ĉ² | w² = z(z − a)(z + 1/a)}, a ≥ 1. If (r₁, θ₁), (r₂, θ₂), (r₃, θ₃) are the polar coordinates centered at 0, a, −1/a, then z ∈ C is represented in three ways as z = r₁e^{iθ₁} = a + r₂e^{iθ₂} = −1/a + r₃e^{iθ₃}. Define the two branches w₁, w₂ of w by w₁ = √(r₁r₂r₃) e^{i(θ₁+θ₂+θ₃)/2} and w₂ = −w₁. Prepare two copies of the Riemann sphere Ĉ, call them Ĉ₁ and Ĉ₂, and consider w₁ as a function on Ĉ₁ and w₂ as a function on Ĉ₂. Cut a slit along the half-line connecting z = a and z = ∞ on Ĉ₁, and let the upper edge (of the slit) be l₁ and the lower edge be l̄₁. Cut a slit along the half-line connecting z = a and z = ∞ on Ĉ₂, and let the upper edge be l₂ and the lower edge be l̄₂. Cut a slit along the line segment connecting z = 0 and z = −1/a on Ĉ₁, and let the upper edge be h₁ and the lower edge be h̄₁.
Cut a slit along the line segment connecting z = 0 and z = −1/a on Ĉ₂, and let the upper edge be h₂ and the lower edge be h̄₂. If l₁ is attached to l̄₂, h₁ is attached to h̄₂, h̄₁ is attached to h₂ and l̄₁ is attached to l₂ so that w₁ and w₂ are continuously connected, a Riemann surface M′_a homeomorphic to the torus is obtained. Let z₁ ∈ Ĉ₁, z₂ ∈ Ĉ₂, and define the map φ : M′_a → M_a by z₁ ↦ (z₁, w₁(z₁)), z₂ ↦ (z₂, w₂(z₂)). Let z₁ ∈ l₁ ⊂ Ĉ₁ and z₂ ∈ l̄₂ ⊂ Ĉ₂, and suppose z₁ = z₂ in M′_a. Then φ(z₁) = φ(z₂) since w₁(z₁) = w₂(z₂). The same holds at the other slits. Therefore, the map φ is well defined. One checks that φ is bijective. We identify M_a with M′_a by this bijection, and consider M_a as a Riemann surface.

Vector spaces H(w) and H̃(w)

In this subsection, we describe the vector spaces reviewed in Section 2 in the case of the meromorphic function w : M_a → Ĉ. The function w : M_a ∋ (z, w) ↦ w ∈ Ĉ is a meromorphic function of degree 3 on M_a; it has (∞, ∞) as a pole of order 3, and it has (0, 0), (a, 0), (−1/a, 0) as zeros of order 1, respectively. The differential dw is meromorphic; it has (∞, ∞) as a pole of order 4, and it has (A₁, ±B₁), (A₂, ±B₂) as zeros of order 1, respectively. Here, A₁, A₂ are the critical points of the polynomial z(z − a)(z + 1/a) (the roots of its derivative) and Bᵢ = w(Aᵢ). The differential dz/w has neither zeros nor poles. Using the nowhere vanishing holomorphic differential dz/w, H(w) can be rewritten in terms of the divisors D(f) of meromorphic functions f; similarly for H̃(w).

Computation of Nul(tw)

In this section we compute the dimension of H(tw) and, as a consequence, compute the nullity of tw. First, we find a basis of H(w). To do this, we first compute the integrals of ηᵢ, wηᵢ, w²ηᵢ, i = 1, 2, 3, on α₁, α₂. In the actual computation, we deform α₁, α₂ and integrate along the closed intervals [0, a] and [−1/a, 0] on the real axis, respectively. Since the denominators of the ηᵢ, i = 1, 2, 3, contain z − A₁ and z − A₂ and these integrals diverge at z = A₁, A₂, we subtract from ηᵢ the differentials of meromorphic functions fᵢ with poles of order at most one at (z, w) = (A₁, B₁), (A₂, B₂) on the Riemann surface M_a, so that the integrals of the meromorphic differentials ηᵢ − dfᵢ converge.

Remark 3. By using Mathematica, we can check that t₁(a) < t₂(a) for small values of a. In fact, as the graphs of Figure 1 suggest, the constant a₀ in the statement of Lemma 4.4 is surely larger than 5. On the other hand, the graphs of Figure 2 suggest that the values of t₁(a) and t₂(a) become close to each other rather quickly as the parameter a becomes bigger.

Computation of Ind(tw)

In this section we compute the index of tg_a for all t and a in the range 1 ≤ a ≤ a₀, where a₀ is as in Section 4. As mentioned in the Introduction, Nayatani [10] computed the index and nullity of t℘′, where ℘ is the Weierstrass ℘-function corresponding to the square lattice Z ⊕ iZ. The Riemann surface C/(Z ⊕ iZ) is isomorphic to M₁, and ℘′ coincides with w : M₁ → Ĉ up to a multiplicative positive real constant. Since we use Nayatani's result in the proof of Theorem 5.2, we state his result in our setting.

Theorem 5.1 (Nayatani [10]) determines Ind(tw) and Nul(tw) for the meromorphic function w : M₁ → Ĉ and all t > 0.

Theorem 5.2. Ind(tw) is determined, for any a in the range 1 ≤ a ≤ a₀ (where a₀ can be numerically evaluated) and all t > 0, as stated in Theorem 1.1.

Proof. Let g = tw. We first consider t = t₁(a). What we already know is that Nul(g) = 4 and Ind(g) = 5 when t = t₁(1); that is, there are exactly 5 eigenvalues of −∆_g smaller than 2. If a moves in the range 1 ≤ a ≤ a₀, then Nul(g) = 4 for all a in this range.
We arrange the eigenvalues of −∆_g from the smallest, and we write the i-th eigenvalue as λᵢ(a), i = 1, 2, · · · . When a moves from 1 to a₀, λᵢ(a) changes continuously, so if Ind(g) changed, then Nul(g) would also change. Therefore Ind(g) = 5 does not change. When t = t₂(a), Ind(g) can also be determined similarly. We close this section with concluding remarks.

Remark 4. (i) Our argument for the proof of Theorem 5.2 is different from Nayatani's in [10]. Nayatani first showed Nul(tw) = 3 and Ind(tw) = 5 when t is sufficiently small, then showed Nul(tw) = 4 when t = t₁(1), t₂(1) and Nul(tw) = 3 when t ≠ t₁(1), t₂(1). It follows that Ind(tw) = 5 since Nul(tw) = 3 does not change when 0 < t < t₁(1). Next, he showed that one eigenvalue larger than 2 becomes smaller than 2 when t passes through t₁(1). Therefore, he could show that Ind(tw) = 5 when t = t₁(1) and Ind(tw) = 6 when t₁(1) < t < t₂(1). A similar argument determines Ind(tw) for t ≥ t₂(1). We use this result to prove Theorem 5.2. (ii) Since H(tw) ≠ {0} for t = t₁(a), t₂(a), there exists a (possibly branched) complete orientable minimal surface in R³ whose extended Gauss map is tw and all of whose ends are planar, for each of t = t₁(a), t₂(a). In particular, the Morse indices of these minimal surfaces are both 5. If t ≠ t₁(a), t₂(a), tw is still the extended Gauss map of some complete orientable minimal surfaces in R³, and Theorem 5.2 computes the Morse indices of these minimal surfaces. (iii) In the case that the Riemann surface has genus zero, it is a remarkable result of Ejiri-Kotani [2] and Montiel-Ros [7] that a generic meromorphic function of degree d has index 2d − 1. On the other hand, in the higher-genus case, there are not so many complete orientable minimal surfaces or meromorphic functions whose indices have been computed. Theorem 5.2 should be of some interest as it provides new examples of meromorphic functions on compact Riemann surfaces of genus 1 whose indices are computable.

Appendix

We record here the computations omitted in Section 4. We compute the integrals of ηᵢ, wηᵢ, w²ηᵢ, i = 1, 2, 3, on α₁, α₂. As before, we compute along the closed intervals [0, a], [−1/a, 0] on the real axis, respectively. Also, we subtract from ηᵢ differentials of meromorphic functions fᵢ on the Riemann surface M_a so that the integrals of the meromorphic differentials ηᵢ − dfᵢ converge. First we compute the following. (ii) The integrals of w²η₁ on α₁.
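For concreteness, the ramification points A₁, A₂ and the (deformed) period integrals above can be evaluated numerically; the following sketch (not part of the paper, which used Mathematica) computes the critical points of z(z − a)(z + 1/a) and the integral of dz/w along [0, a] for a sample value of a.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0  # sample value of the parameter, 1 <= a <= a_0

# p(z) = z(z - a)(z + 1/a) = z^3 + (1/a - a) z^2 - z,
# so p'(z) = 3 z^2 + 2 (1/a - a) z - 1; its roots are A1, A2.
A1, A2 = np.roots([3.0, 2.0 * (1.0 / a - a), -1.0])
B1, B2 = (np.sqrt(A * (A - a) * (A + 1.0 / a) + 0j) for A in (A1, A2))
print("A1, A2 =", A1, A2)

# On (0, a) we have p(z) < 0, so dz/w is purely imaginary there; the
# endpoint singularities z^(-1/2) and (a - z)^(-1/2) are integrable.
integrand = lambda z: 1.0 / np.sqrt(-z * (z - a) * (z + 1.0 / a))
period, err = quad(integrand, 0.0, a, limit=200)
print(f"|integral of dz/w over [0, a]| = {period:.6f} (abs. error ~ {err:.1e})")
```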
2021-04-20T01:15:53.553Z
2021-04-19T00:00:00.000
{ "year": 2021, "sha1": "8ea1dbc3642e2d51f74d14e22311fc74c45a0d9b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8ea1dbc3642e2d51f74d14e22311fc74c45a0d9b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
221397097
pes2o/s2orc
v3-fos-license
Realizing highly entangled states in asymmetrically coupled three NV centers at room temperature

Nano-Scale Transport Physics Laboratory, School of Physics, University of the Witwatersrand, Private Bag 3, WITS 2050, Johannesburg, South Africa (Electronic mail: somnath.bhattacharyya@wits.co.za)

Despite numerous efforts, the coupling between randomly arranged multi-NV centers (and also resonators) has not been improved significantly, mainly due to our limited knowledge of their entanglement times (2t_ent). Here, we demonstrate a very strong coupling between three NV centers by using a simulated triple electron-electron resonance experiment based on a new quantum (U_C) gate on the IBM quantum simulator, with 2t_ent ~ 12.5 μs for centers arranged in a triangular configuration. Interestingly, through breaking the symmetry of couplings an even lower 2t_ent ~ 6.3 μs can be achieved. This simulation not only explains the luminescence spectra in recently observed three-NV centers [Haruyama, Nat. Commun. 2019] but also shows a large improvement of the entanglement in artificially created structures through a cyclic redistribution of couplings. Realistically disordered coupling configurations of NV-center qubits with short time periods and high (0.89-0.99) fidelity of states clearly demonstrate the possibility of accurate quantum registers operated at room temperature.

The concept of simulating quantum mechanical systems more efficiently on quantum computers than on classical computers has become far more realisable in recent years with the development of quantum registers consisting of up to tens of superconducting qubits [1]. This has already enabled effective simulations of the dynamics of many-body quantum mechanical systems [2,3], condensed matter physics [4,5,6], high-energy physics [7,8] and quantum chemistry [9,10]. Simulating complex quantum mechanical systems requires complex quantum circuits consisting of more quantum gates. Such circuits take a longer time to operate on qubits, so ideally the coherence time of the quantum register should be much longer than the operating time of the quantum circuit, since the accuracy of quantum simulation depends on the coherence time of the qubits [11]. Currently, a single flux qubit can have coherence times up to 0.5 ms [12], while a spin qubit consisting of a divacancy in silicon carbide can reach 1.3 ms [13].
A silicon-vacancy spin qubit can demonstrate coherence times of up to 13 ms and spin relaxation times up to 1 s below 500 mK [14]; however, a quantum computer that requires such low temperatures is very limiting, and ideally the most practical quantum computer should be able to operate at room temperature [15]. NV centers in diamond show the most promise in this regard since they have demonstrated the longest coherence times at room temperature compared to other defects, reaching 0.7 ms in diamond with a natural abundance (1.1%) of 13C. This coherence time can be improved to 1.8-2.0 ms by suppressing impurities and defects [16]. This makes NV centers a good candidate for spin qubits in a quantum computer operating (theoretically) at room temperature, which could provide a distinct advantage over superconducting flux qubits [17]. However, in practice, a three-NV-center quantum register has not been a feasibly achievable option due to the difficulty of fabricating three coupled NV centers in diamond. Recently, Haruyama et al. have claimed the synthesis and analysis of three coupled NV centers using implantation of C5N4Hn ions from an adenine source to scale up the creation of NV centers in diamond [18]. Not all NV centers that were close enough (on the order of 10 nm) were coupled; only one group of three NV centers was classified as strongly coupled, and this triplet group was discussed further, an observation that still needs theoretical support [18]. The entanglement of quantum states is essential to the formalism of quantum theory [19], and the entanglement of qubits in a quantum register is fundamental to the operation of quantum circuits that can theoretically outperform classical computers when it comes to quantum simulation [20]. Entanglement of two NV centers at room temperature has been demonstrated [17], but until recently the large-scale creation of three coupled NV centers in diamond has proven difficult [18]. Here, we show the performance of newly developed quantum gates that enable quantum simulation of the entanglement of first two, and then three, such coupled NV centers using a double and triple electron resonance pulse sequence [21,22]. This operation not only explains the experimental data in Ref. 18 but also demonstrates a very short entanglement time in an ordered configuration. A triangular configuration of NV centers is important for creating an extended lattice structure as used for superconducting qubits. This will enable simulations of many-body interactions in the presence of disorder or unequal coupling between qubits or spin centers [2-6]. Therefore, we extend the work beyond ordered configurations of NV centers by breaking the symmetry to produce strong luminescence (resonance) features. The coupling strength between any two NV center defects depends on the size of the quantum dots as well as the distance between them [23]. Like the resonance spectra of quantum dots, the geometric configurations of the NV centers in relation to one another (such as vertical, horizontal or triangular) would affect the luminescence if the coupling between them is varied [24]. Simulating the entanglement of an idealised configuration of three coupled NV centers has value, but the demonstration of entanglement in a non-ideal system is more relevant with respect to the physical realisation of a three-NV-center quantum register, as physically realised systems are rarely ideal.
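For orientation, the distance dependence of the NV-NV coupling can be estimated with the standard point-dipole formula; the sketch below (an order-of-magnitude estimate that omits angular factors, not a calculation from this work) reproduces the kHz-scale couplings expected at separations of around 10 nm.

```python
import numpy as np

mu0_over_4pi = 1e-7            # T m / A
gamma_e = 1.761e11             # electron gyromagnetic ratio, rad s^-1 T^-1
hbar = 1.055e-34               # J s

def dipolar_coupling_hz(r_nm):
    """Point-dipole coupling (mu0/4pi) * gamma_e^2 * hbar / r^3, in Hz."""
    r = r_nm * 1e-9
    omega = mu0_over_4pi * gamma_e**2 * hbar / r**3   # rad/s
    return omega / (2.0 * np.pi)

for r_nm in (5.0, 10.0, 20.0):
    print(f"r = {r_nm:4.1f} nm  ->  nu ~ {dipolar_coupling_hz(r_nm)/1e3:.1f} kHz")
```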
Considering this, the distance between NV centers and the geometry of the configuration is taken to resemble that of a triangle in terms of the coupling between the NV centers, as these configurations are the most disordered. The parameter under investigation here is simply the coupling strength for different ordered and disordered configurations of the NV centers. The asymmetry of a given configuration is represented by the use of a coupling constant (R), defined by Eq. (1) in terms of the three pairwise coupling strengths between the NV centers, as shown in Figure 1. In particular, three extreme configurations are examined: an equilateral representation with equal coupling strengths between each pair of NV centers, an isosceles representation with two of the three pairwise coupling strengths equal, and a scalene representation with all three pairwise coupling strengths different. Since the triple electron resonance scheme developed here to entangle three coupled NV centers requires the decoupling of one pair of NV centers in the final free evolution period, the rotation of each of these configurations is also investigated. This is achieved by a cyclic redistribution of the coupling strengths of each coupled NV center pair. To demonstrate the physical importance of these simulations, the scalene configuration is investigated by using the coupling strengths found by Haruyama et al. [18]. A single negatively charged NV center defect (Figure 1.a) consists of six electrons existing in a spin-triplet state, with the spin sublevels m_s = 0 and m_s = ±1 whose splitting is controlled by external magnetic fields, as shown in Figure 1.b. This allows the useful formation of a two-level m_s = 0 and m_s = 1 system that can act as a single qubit. The NV center defect interacts with the nuclei of surrounding nitrogen and 13C atoms in the diamond. This is a major source of noise that creates decoherence in NV centers [25], but for 12C-enriched diamond, the electron spin coherence time can reach 3.0 ms [17]. Even with the more conservative coherence time of 2.0 ms, NV center electron spin qubits have long enough coherence times for accurate execution of fairly complex quantum circuits. The double and triple electron resonance pulse schemes include a free evolution of the system of NV centers for a time of the order of 10^2 μs, so the noise from electron-nuclear spin interactions is negligible. This means that the electron-nuclear hyperfine coupling terms are excluded from the secular Hamiltonian describing the coupling between the three NV center electron spins. The interaction between electron spins of different NV centers can be simulated in a similar way as the interaction between the nuclear and electron spins in a single NV center [26]. In a coupled NV center triplet (comprised of centers A, B and C), each NV center is coupled to the other two NV centers, as shown in Figure 1.c. Figure 1. a) Visual representation of a negative NV center defect in diamond. b) Energy level transition of the NV center electron spin that can be controlled by an external magnetic field. c) Three strongly coupled NV centers; one pair is the most strongly coupled, while another pair is the least strongly coupled [18]. d) Effective combined energy level scheme of the three coupled NV centers, where each NV center forms the two-level m_s = 0 and m_s = 1 system.
Spin transitions represented by solid lines can be driven with microwaves due to the different Zeeman shift caused by different magnetic field alignments for each NV center [17]. Transitions represented by dashed lines are driven by the dipolar coupling between NV centers. Using the secular approximation, the system in a magnetic field B is described by

H = Σ_{i∈{A,B,C}} [∆ (Ŝ_z^i)² + γ B·Ŝ^i] + H_dip,

where ∆ = 2.87 GHz is the zero-field splitting [27], γ = 2.8 MHz/G is the gyromagnetic ratio [28] and Ŝ^i is the spin operator for i ∈ {A, B, C}. The last term is the dipolar coupling term between the three NV centers, given by

H_dip = ν_AC Ŝ_z^A Ŝ_z^C + ν_CB Ŝ_z^C Ŝ_z^B + ν_AB Ŝ_z^A Ŝ_z^B,

where ν_AC is the dipolar coupling between centers A and C, ν_CB between centers C and B, and ν_AB between centers A and B. The spin-flip terms Ŝ₊^i Ŝ₋^j + Ŝ₋^i Ŝ₊^j for i ∈ {A,B,C}, j ∈ {B,C,A} are ignored because the dipolar coupling is smaller than the energetic detuning between any two spins [17]. H_dip is considered when simulating the coupling between NV centers in the absence of an external magnetic field, with spin-flip transitions of individual NV centers driven by microwave pulses as shown in Figure 1.d. In general, the circuit representing the free evolution of two coupled NV centers (i and j) is found by representing the two-qubit gate U_C = exp(−i 2πν·2τ (Ŝ_z ⊗ Ŝ_z)) with an R_z(2πν·2τ) rotation gate on one qubit combined with CNOT gates, with Hadamard gates initially applied to each qubit to allow for interaction. Here, 2τ is the free evolution time, i.e., the time period over which the NV centers evolve freely, and ν is the dipolar coupling between the NV centers. Using the U_C gate, a quantum circuit is developed to simulate a DEER (double electron-electron resonance) experiment for each coupled pair in a system of three coupled NV centers A, B and C, with dipolar couplings ν_AC = 4.6 kHz, ν_CB = 53 kHz and ν_AB = 24.1 kHz. The results of the simulated DEER experiment are shown in Figure 2. Here, the simulated luminescence of each NV center for a given 2τ in a DEER experiment is shown on the vertical axis by the normalised counts: the proportion of measurements that resulted in the labelled state out of a total of 1024 measurements for each 2τ. The normalised counts are based off the ground state, as the excited state changes depending on which sensor-emitter pair is being simulated. Figure 2 shows the oscillation between the ground state and the excited state that would be physically observed in a real DEER experiment [17,18]. The dipolar coupling between any two NV centers is a direct measurement of the period of oscillation of each of the plotted lines in Figure 2, where normalised counts are plotted against a rescaled time axis instead of 2τ to compare with the results found by Haruyama et al. The DEER plots are simulated reproductions of the intensity plots that Haruyama et al. used to show the fabrication of three strongly coupled NV centers [18]. By modifying the DEER quantum circuit for two coupled NV centers, a new entanglement circuit is developed (see Methodology). The entanglement of two coupled NV centers with dipolar coupling ν = 4.93 kHz is analysed by measuring the normalised counts of all possible states of the two NV center qubits over increasing free evolution times. The evolution time required for entanglement is defined as 2τ_ent. From Figure 2, it can be seen that the two qubits reach the entangled Bell state [17]. A triple electron resonance scheme [21,22] was used to entangle three NV centers, with each NV center acting as the sensor and emitter for the other two NV centers. The linear combination of the states of all three NV centers is considered as the overall state of the system.
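The construction of the free-evolution gate can be checked directly; the sketch below verifies numerically (assuming the standard decomposition of a ZZ rotation, which may differ in detail from the exact circuit used in this work) that exp(−iθ Z⊗Z) equals a single-qubit R_z rotation sandwiched between CNOT gates, up to a global phase.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rz(angle):
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

nu_hz, tau_s = 4.93e3, 10e-6        # sample coupling and evolution time
theta = 2 * np.pi * nu_hz * tau_s

U_C = expm(-1j * theta * np.kron(Z, Z))
circuit = CNOT @ np.kron(I2, rz(2 * theta)) @ CNOT

# compare up to a global phase
phase = circuit[0, 0] / U_C[0, 0]
print("max |difference| =", np.max(np.abs(circuit - phase * U_C)))
```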
The decoupling of one NV center pair [30] for the final period of free interaction just before measurement allows the entanglement of the three NV centers. To properly analyse the entanglement, the plots of the relevant states are measured for each case. The coupling between three NV centers results in each NV center qubit oscillating between the ground |0⟩ and excited |1⟩ states according to the linear combination of the oscillations caused by the individual coupling of the NV center with each of the other two NV centers. The proportional occupation (normalised counts) of each state of the system follows similar linear combinations of oscillations caused by the coupling between NV centers. The fidelity of the system over time is calculated as F = [Tr √(√ρ_T ρ √ρ_T)]², where ρ_T is the density matrix of the target entangled state and ρ is the density matrix of the measured state for increasing 2τ. Fidelity is also plotted over increasing 2τ as a means of further justifying the time at which the system reaches the entangled state, as the maximum fidelity shows when the system becomes closest to the entangled state. The real part of the density matrix of the system at 2τ_ent provides a visual representation of how close the system comes to the entangled state. We simulated 27 different configurations of the three coupled NV centers, the 12 most important results of which are summarised in Table 1 [29].

Figure 2. The plotted results of simulated DEER experiments on three coupled NV centers with varying dipolar coupling strengths, using a three-qubit register of c) the IBM Qasm Simulator and d) the IBM London quantum emulator [29]. The Normalised Counts represent the relative intensity of light measured.

The first configuration of three coupled NV centers is the symmetrical case, with an equilateral (type) representation of NV centers in which all three pairwise couplings are equal (inset in Figure 3.b). The entangled state reached for this configuration is (1/√2)(|000⟩ − |111⟩), as shown by the density matrix in Figure 3.b. The |000⟩ and |111⟩ states are plotted to show how the system evolves over increasing 2τ. Three different coupling strengths are analysed for this equilateral representation. The shape of the evolution of the |000⟩ and |111⟩ states for each of these different coupling strengths is the same (Figure 3.a), suggesting that there could be a consistent evolution of the system for the equilateral representation. Importantly, 2τ_ent is shorter for strong coupling and longer for weak coupling. The second configuration of coupled NV centers is the isosceles (type) representation, with twelve different arrangements of coupling strength classified as either single dominant (the two equal couplings weaker than the third) or double dominant (the two equal couplings stronger than the third), with the odd coupling assigned in turn to each of the three NV center pairs. Table 1 shows that certain of these arrangements give a shorter 2τ_ent than others, with the exception of the case where the odd coupling is much weaker than the two equal ones. As expected, there is a symmetry between different configurations with the same coupling strength for the decoupled pair: interchanging the other two coupling strengths results in the same 2τ_ent, albeit with two different states being entangled. Interestingly, configurations with a larger difference between maximum and minimum coupling strengths reached an entangled state after a shorter 2τ_ent. Double dominant configurations generally have shorter 2τ_ent, except when the odd coupling is much stronger than the two equal ones, which provides an interesting anomaly that is discussed further. The evolution of the |100⟩ and |011⟩ states as well as the fidelity to this entangled state is shown in Figure 3.a.
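The fidelity formula above is generic bookkeeping and can be illustrated directly; the sketch below (not the study's measured density matrices) evaluates F = [Tr √(√ρ_T ρ √ρ_T)]² for a two-qubit Bell target against progressively dephased versions of it.

```python
import numpy as np
from scipy.linalg import sqrtm

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
rho_T = np.outer(bell, bell.conj())

def fidelity(rho_T, rho):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho_T) rho sqrt(rho_T)))^2."""
    s = sqrtm(rho_T)
    return np.real(np.trace(sqrtm(s @ rho @ s)))**2

for p in (0.0, 0.25, 0.5):                  # dephasing strength
    rho = (1 - p) * rho_T + p * np.diag(np.diag(rho_T))  # damp coherences
    print(f"p = {p:.2f}  F = {fidelity(rho_T, rho):.3f}")
```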
The maximum fidelity of 0.963, as well as the point at which the normalised counts of the |100⟩ state equal those of the |011⟩ state, occurs at 6.3 μs. After a free evolution time of 6.3 μs, the configuration with ν_ij = ν_jk ≪ ν_ik (Figure 3.a inset) came very close to the entangled state (1/√2)(|100⟩ + |011⟩), shown in Figure 3.d. This shows promise for the entanglement of NV center spin qubits in more disordered configurations.

The third configuration of coupled NV centers is the scalene-type representation, with six different arrangements of ν_ij ≠ ν_jk ≠ ν_ik, with i, j, k iterating through A, B, C (Figure 4.b inset). The 2τ values are longer than for the previous two configurations, which is expected as this is the most distorted system simulated here. Each coupled NV center pair was taken to have a different coupling strength from the other two pairs, with coupling strengths taking values of 5 kHz, 20 kHz or 50 kHz. Some interesting similarities in entanglement times were observed: the system reached different entangled states in 2τ = 62.8 μs when the decoupled pair had ν = 5 kHz, and in 2τ = 125.7 μs otherwise. The evolution of the relevant states for these configurations is shown over increasing 2τ in Figures 4.a and 4.c, with the density matrices of the entangled states reached shown in Figures 4.b and 4.d. From these figures, it can be seen that interchanging two of the coupling strengths changes the entangled state reached in the same way as interchanging the first two qubits in the quantum circuit.

A realistic case of the scalene representation uses the coupling strengths between three coupled NV centers as measured by Haruyama et al. [18]. As before, there are six possible arrangements of this set of coupling strengths, three of which are summarised in Table 1. For all arrangements of these coupling strengths, the entangled Greenberger-Horne-Zeilinger (GHZ) state was reached at 2τ = 11.7 μs, as shown in Figure 5. Importantly, the longest and shortest 2τ for this disordered configuration are comparable to the longest and shortest 2τ for both the weakly coupled equilateral and the single dominant isosceles representations. This implies that disordered configurations are useful for the development of an NV center quantum register.

Interchanging two of the couplings changes which entangled GHZ state is reached. No configuration reaches an entangled state at 2τ = 0; however, different entangled states can be achieved with relatively high fidelity for a free evolution time of 2τ = nπ/ν for some integer n, where ν is the strongest coupling in the configuration. For equilateral representations, a maximally entangled state was reached for n = 2 with a fidelity of 0.996. For the isosceles representation, the evolution of the system is dominated by the more strongly coupled NV centers, so the three qubits can reach an entangled state for n = 2 as with the equilateral representation, but with notably lower fidelity. One pair of NV centers having a different coupling strength results in variations in 2τ, with the longest case being n = 10. The shortest case (n = 1) was due to the strongly coupled pair being decoupled for the final free evolution period, thus decreasing decoherence for short 2τ. The scalene representation consistently reached an entangled state for n = 10 and n = 11 with relatively high fidelity, ranging from 0.897 to 0.990 for the idealised and realistic cases respectively; however, shorter 2τ were also observed.
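A quick numerical illustration of the 2τ = nπ/ν relation, assuming the couplings are in kHz and the free evolution times in μs as in the values quoted above:

```python
import numpy as np

def two_tau(n: int, nu_hz: float) -> float:
    """Free evolution time 2*tau = n*pi/nu, in seconds."""
    return n * np.pi / nu_hz

for n in (1, 2, 10, 11):
    # e.g. n = 1 gives ~62.8 us and n = 2 gives ~125.7 us for nu = 50 kHz
    print(n, two_tau(n, 50e3) * 1e6, "us")
```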
In the idealised scalene representation, the maximally entangled state was achieved at 62.8 μs with the decoupled NV center pair at ν = 5 kHz, but for the configuration with the decoupled NV center pair at ν = 20 kHz, the system only closely resembled the entangled (1/√2)(|000⟩ + |111⟩) state. The fidelity of this state was 0.829, which is not high enough to be considered maximally entangled, but it does point to the potential for disordered systems to reach entangled states at shorter free evolution times. This was seen in the realistic case, where the decoupled pair had a coupling strength of ν = 24.1 kHz. The short 2τ (11.7 μs) arises from the disorder of the system, which results in a higher frequency of random alignment of the three qubits. Additionally, because the coupled pair with ν = 24.1 kHz was decoupled during the final period of free evolution, the stronger coupling (ν = 53 kHz) dominates the interaction for short 2τ, and the weak coupling (ν = 4.6 kHz) acts to decrease decoherence of the system for short 2τ. These factors can allow realistically disordered configurations of coupled NV centers to reach an entangled state at short 2τ.

The effective simulation of DEER experiments on coupled NV centers, along with previous work simulating hybrid quantum systems and entanglement on a quantum computer [30], demonstrates how simulations performed on the IBM Quantum Experience can be used to explore the potential of real NV center spin qubits for a convenient, accurate quantum computer. Importantly, the free evolution of coupled NV centers can be used to transform multiple NV center spin qubits into desired states. In this case, the desired state was the entangled Bell state, which was achieved at 2τ = 21.2 μs for two NV centers with a dipolar coupling of ν = 4.93 kHz. Since the physical configuration of three coupled NV centers influences the disorder of the system, the shortest 2τ for three coupled NV centers was different for each of the three configurations: 12.5 μs for the equilateral representation, 6.3 μs for the isosceles representation and 11.7 μs for the scalene representation. Shorter 2τ were observed for more ordered systems with a higher coupling constant. The simulations in this work suggest that entanglement between three strongly coupled NV centers can be achieved for varying symmetric and asymmetric configurations in under 20 μs, which is much shorter than the coherence time of the individual NV center qubits.

The coupling constant defined in Eq. (1) provides a way to quantify the order of the system of three coupled NV centers, as well as the order of any pair of coupled NV centers. By neglecting data points with fidelity below 0.9, a trend can be seen in the relationship between the free evolution time required for entanglement and the coupling constant of the system. Figure 3.a clearly shows how 2τ decays as the order of the system increases. This trend is most evident when all three coupled NV centers are considered, rather than cases where one of the coupled pairs is left out. This implies that 2τ is affected more by the order of the system of three NV centers than by the order of each coupled pair. High-fidelity entanglement at short free evolution times is therefore favoured by more ordered systems. However, some of the simulated disordered configurations demonstrate short 2τ, as shown further on in this work.

Discussion

NV centers can be created by ion implantation, as shown in ref. [18] [2-6].
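Because the secular ZZ terms in H_dip commute with one another, a single free-evolution block for the three coupled NV centers can be built exactly by applying the pairwise gates in sequence. Below is a minimal sketch under that assumption, using the realistic couplings quoted above (taken to be in kHz); the qubit ordering and function names are our own, not those of the original circuits.

```python
import numpy as np
from qiskit import QuantumCircuit

def zz(qc: QuantumCircuit, a: int, b: int, nu_hz: float, t_s: float) -> None:
    """Append exp(-i * 2*pi*nu*t * Z_a Z_b / 2) as CNOT - Rz - CNOT."""
    qc.cx(a, b)
    qc.rz(2 * np.pi * nu_hz * t_s, b)
    qc.cx(a, b)

def three_nv_free_evolution(nu_ab, nu_bc, nu_ac, t_s) -> QuantumCircuit:
    """Free evolution of three NV centers under the secular dipolar coupling.
    The three ZZ terms commute, so sequential application is exact."""
    qc = QuantumCircuit(3)
    zz(qc, 0, 1, nu_ab, t_s)
    zz(qc, 1, 2, nu_bc, t_s)
    zz(qc, 0, 2, nu_ac, t_s)
    return qc

# Realistic couplings (assumed kHz) over 2*tau = 11.7 us
evo = three_nv_free_evolution(4.6e3, 53e3, 24.1e3, 11.7e-6)
```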
In general, interaction with photons produces sub-radiant and super-radiant states whose resonance peak can be tuned through the symmetry of a three-dot configuration. A strong resonance peak relative to the non-resonant background can be demonstrated by introducing inequalities in the interdot couplings through a distribution of sizes and distances [23,24]. In some irregular configurations, strong resonant tunnelling can be found, arising from strong entanglement or from the right combination of states, as observed when breaking the symmetry of a regular configuration. These special structures would be useful for developing NV center based hybrid quantum devices [31], for example by adding squeezed states obtained from the strong coupling between a resonator and a pair of NV centers [32]. Such closed-loop configurations of NV centers can be useful for the development of spin qubits and topological qubits.

In summary, the unexpectedly short 2τ for the scalene configuration shows the potential of NV center quantum registers for room temperature operation of quantum computers, because the fabrication of disordered configurations of coupled NV centers is far easier to achieve than the fabrication of ordered systems. This is therefore an important step in the development of accessible, convenient and efficient quantum computers that remain accurate.

Methodology

A double electron-electron resonance (DEER) experiment was simulated to demonstrate the reliability of the U = e^{i2πν·2τ (Ŝ_z ⊗ Ŝ_z)} gate. In a DEER experiment, both qubits are initialised in the |0⟩ state. Applying a π/2 pulse transforms the sensor qubit into the equal superposition (1/√2)(|0⟩ + |1⟩). The qubits are left to evolve freely under the dipolar coupling of the electron spins for a time 2τ. A π pulse is then applied to the sensor qubit, which is left to evolve freely again for a time τ, after which a π pulse is applied to the emitter qubit. After another free evolution period of τ, a π/2 pulse is applied to the sensor qubit before measurement. The qubits are allowed to interact for increasing free evolution times 2τ [17]. To simulate a DEER experiment, π/2 microwave pulses are simulated using Hadamard gates and π microwave pulses by NOT gates [17]. The system is initialised in the |0⟩ state for all NV centers, with a measurement gate as the readout operator. The quantum circuit simulating the microwave pulse sequence between periods of free evolution is shown below. By applying the second π/2 pulse to both qubits directly after the π pulse instead of just before measurement (Figure 6.a), the two qubits can reach the entangled Bell state.
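To make the pulse sequence concrete, here is a minimal Qiskit sketch of the two-qubit DEER simulation described above, using Hadamard gates for π/2 pulses, X gates for π pulses and the CNOT-Rz-CNOT block for free evolution. The exact placement and duration of each free-evolution segment, the coupling value and the sweep range are assumptions for illustration, not the original circuit.

```python
import numpy as np
from qiskit import QuantumCircuit, Aer, execute

def free_evo(qc, nu_hz, t_s):
    """exp(-i * 2*pi*nu*t * Z(x)Z / 2) on qubits (0, 1) via CNOT - Rz - CNOT."""
    qc.cx(0, 1)
    qc.rz(2 * np.pi * nu_hz * t_s, 1)
    qc.cx(0, 1)

def deer_circuit(nu_hz, tau_s):
    """Qubit 0 is the sensor, qubit 1 the emitter; both start in |0>."""
    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # pi/2 pulse on the sensor
    free_evo(qc, nu_hz, tau_s)
    qc.x(0)                      # pi pulse on the sensor
    free_evo(qc, nu_hz, tau_s)
    qc.x(1)                      # pi pulse on the emitter
    free_evo(qc, nu_hz, tau_s)
    qc.h(0)                      # final pi/2 pulse on the sensor
    qc.measure([0, 1], [0, 1])
    return qc

# Sweep the free evolution time; normalised counts = ground-state fraction of 1024 shots
backend = Aer.get_backend('qasm_simulator')
for tau in np.linspace(0, 200e-6, 21):
    counts = execute(deer_circuit(4.6e3, tau), backend, shots=1024).result().get_counts()
    print(tau, counts.get('00', 0) / 1024)
```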